\section{Fault detection and isolation on steel plates}
\begin{frame}
\frametitle{Dataset description}
The steel plates faults dataset comes from research by Semeion, Research Center of Sciences of Communication. The aim of the research was to correctly classify the type of surface defect in stainless steel plates. Below is some information about the dataset:
\begin{itemize}
\item number of fault classes: $6 + 1$ (no faults);
\item number of attributes: $27$;
\item number of instances: $1941$;
\item absence of missing values.
\end{itemize}
Unfortunately, no further details on the covariates are available; the loading sketch below therefore assumes a plausible column layout.
\end{frame}
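As an illustration only (the file name and the column layout are our assumptions, not information given in the presentation), the data can be arranged as a covariate matrix $X$ and a one-hot response matrix $Y$:
\begin{Verbatim}
data = readmatrix('steel_plates_faults.csv'); \textcolor{green}{% hypothetical file name}
X = data(:, 1:27);   \textcolor{green}{% covariates, n x m = 1941 x 27}
Y = data(:, 28:end); \textcolor{green}{% assumed one-hot fault indicators, n x p = 1941 x 7}
\end{Verbatim}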
Presentation/Sections/PLS algorithm.tex (18 additions, 3 deletions)
@@ -10,6 +10,21 @@ \section{Description of the PLS algorithm}
The first algorithm is more accurate than the other; however, it requires more computational time than PLS2 to find the $\alpha$ eigenvectors onto which the $m$ covariates are projected.
\end{frame}
\begin{frame}
\frametitle{Data structures}
Before describing the algorithm, we recall that:
\begin{itemize}
\item the matrix $X \in\mathbb{R}^{n\times m}$ is decomposed into a \textbf{score matrix} $T \in\mathbb{R}^{n\times\alpha}$ and a \textbf{loading matrix} $P \in\mathbb{R}^{m\times\alpha}$ such that $X = \hat{X} + E = T\cdot P^\top + E$, where $E \in\mathbb{R}^{n\times m}$ is the (true) \textbf{residual matrix} for $X$;
\item the matrix $Y \in\mathbb{R}^{n\times p}$ is decomposed into a \textbf{score matrix} $U\in\mathbb{R}^{n\times\alpha}$ and a \textbf{loading matrix} $Q\in\mathbb{R}^{p\times\alpha}$ such that $Y = \hat{Y} + \widetilde{F} = U\cdot Q^\top + \widetilde{F}$, where $\widetilde{F}\in\mathbb{R}^{n\times p}$ is the (true) \textbf{residual matrix} for $Y$;
\item the matrix $B\in\mathbb{R}^{\alpha\times\alpha}$ is the \textbf{diagonal regression matrix} such that $\hat{U} = T\cdot B$; a dimension check of these structures is sketched below.
\end{itemize}
\end{frame}
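As a shape check only (the value of $\alpha$ is arbitrary and the zero matrices are placeholders, not the output of PLS), the structures fit together as follows:
\begin{Verbatim}
n = 1941; m = 27; p = 7; alpha = 10;      \textcolor{green}{% alpha chosen arbitrarily}
T = zeros(n, alpha); P = zeros(m, alpha); \textcolor{green}{% scores and loadings for X}
U = zeros(n, alpha); Q = zeros(p, alpha); \textcolor{green}{% scores and loadings for Y}
B = eye(alpha);                           \textcolor{green}{% diagonal regression matrix}
X_hat = T*P'; \textcolor{green}{% n x m, so that X = X_hat + E}
Y_hat = U*Q'; \textcolor{green}{% n x p, so that Y = Y_hat + F}
U_hat = T*B;  \textcolor{green}{% inner relation between the score matrices}
\end{Verbatim}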
@@ … @@
E = E - t*p'; \textcolor{green}{% update of the residuals for matrix X}
F = F - b*t*q'; \textcolor{green}{% update of the residuals for matrix Y}
\end{Verbatim}
\end{frame}
@@ -62,6 +77,6 @@ \section{Description of the PLS algorithm}
\textcolor{blue}{end}
Y_hat = X*B2; \textcolor{green}{% computation of predictions}
\end{Verbatim}
For each row of \verb|Y_hat| the fault class is chosen by assigning $1$ to the column with the largest value and $0$ to the others. \\Moreover, to improve the performance of PLS it is necessary to \textbf{normalize} both $X$ and $Y$ before running the algorithm, as sketched below.
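A minimal sketch of these two steps (not the authors' code; MATLAB's \verb|normalize|, whose default method is the z-score, is one possible choice for the normalization):
\begin{Verbatim}
X = normalize(X); \textcolor{green}{% z-score normalization, done before running PLS}
Y = normalize(Y);
[~, idx] = max(Y_hat, [], 2); \textcolor{green}{% column with the largest value per row}
Y_pred = zeros(size(Y_hat));
Y_pred(sub2ind(size(Y_pred), (1:size(Y_pred,1))', idx)) = 1; \textcolor{green}{% one-hot classes}
\end{Verbatim}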