
Dimensionality Reduction

A common task in data analysis is to identify a lower-dimensional manifold in a high-dimensional data space, such that the features of the data are almost entirely captured by the low-dimensional manifold. Formally, we seek a vector \(\mathbf{y}_{i}\) for each data element \(\mathbf{x}_{i}\) that parametrises the location of \(\mathbf{x}_{i}\) on the manifold.

Linear Algebra Introduction

Our \(D\)-dimensional data points \(\mathbf{x}_{i}\) can be collected into an \(N \times D\) real-valued matrix, \(X\). We will assume for the following that the data has been 'centered', namely \(\sum_{i}\mathbf{x}_{i} = \mathbf{0}\).

We define two related symmetric matrices which capture aspects of the data distribution: the scatter matrix, \(S = X^{T}X\), and the Gram matrix, \(G = XX^{T}\).
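As a concrete illustration, here is a small numpy sketch that builds both matrices from a random, centred data matrix (the variable names are illustrative, not from any particular library):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 20, 3
X = rng.normal(size=(N, D))
X = X - X.mean(axis=0)   # centre the data

S = X.T @ X   # scatter matrix, D x D
G = X @ X.T   # Gram matrix,   N x N

# Both matrices are symmetric by construction.
assert np.allclose(S, S.T) and np.allclose(G, G.T)
```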

Both matrices are real, symmetric and positive semi-definite. This gives us some useful properties if we consider their eigenvalue/eigenvector and singular value decompositions. Recall that for a symmetric matrix the eigenvectors form an orthonormal basis. A symmetric matrix \(A\) can then be decomposed as \(A = U\Lambda U^{T}\), where \(U\) is the matrix with the eigenvectors as its columns, and \(\Lambda\) the diagonal matrix of eigenvalues.
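A minimal numpy sketch of this decomposition, using the scatter matrix of a random centred dataset as the symmetric matrix (all names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))
X = X - X.mean(axis=0)
S = X.T @ X  # symmetric, positive semi-definite by construction

# eigh is specialised to symmetric matrices: it returns real eigenvalues
# (in ascending order) and orthonormal eigenvectors as the columns of U.
evals, U = np.linalg.eigh(S)

assert np.all(evals >= -1e-10)                   # positive semi-definite
assert np.allclose(U.T @ U, np.eye(3))           # orthonormal eigenvectors
assert np.allclose(S, U @ np.diag(evals) @ U.T)  # S = U Lambda U^T
```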

A generalisation of this beyond square matrices is the Singular Value Decomposition (SVD). In this case, we can decompose an \(N \times D\) matrix \(X\) into an orthonormal matrix \(U\), a basis for the columns; a diagonal matrix of singular values, \(\Sigma\); and an orthonormal basis over the rows, the matrix \(V\). Using the singular value decomposition, we can thus write \(X = U\Sigma V^{T}\). Here, the eigenvectors of \(S = X^{T}X\) are given by the right singular vectors of \(X\) (the columns of \(V\)), and the eigenvectors of \(G = XX^{T}\) are the left singular vectors of \(X\) (the columns of \(U\)).
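This relationship between the SVD and the eigendecomposition of the scatter matrix can be checked numerically; a small sketch (assuming numpy's convention that `np.linalg.svd` returns \(V^{T}\) as its last output):

```python
import numpy as np

rng = np.random.default_rng(2)
N, D = 50, 4
X = rng.normal(size=(N, D))
X = X - X.mean(axis=0)

U, s, Vt = np.linalg.svd(X, full_matrices=False)  # X = U diag(s) Vt

# Eigenvalues of the scatter matrix are the squared singular values,
# and its eigenvectors are the right singular vectors (rows of Vt).
evals, evecs = np.linalg.eigh(X.T @ X)
evals, evecs = evals[::-1], evecs[:, ::-1]  # re-sort in descending order

assert np.allclose(evals, s**2)
# Eigenvectors are only defined up to a sign, so compare |dot products|.
assert np.allclose(np.abs(np.sum(evecs * Vt.T, axis=0)), 1.0)
```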

An important property of the SVD is that if we take only the first \(K\) singular values and left and right singular vectors, we obtain the best rank-\(K\) approximation to \(X\) (assuming least-squares loss). Namely, we minimise the error \(E = \sum_{ij}(X_{ij} - \tilde{X}_{ij})^{2}\) with \(\tilde{X} = U_{K}\Sigma_{K}V_{K}^{T}\), where \(U_{K}\) and \(V_{K}\) contain the first \(K\) left and right singular vectors, and \(\Sigma_{K}\) the first \(K\) singular values.
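A sketch of the rank-\(K\) truncation in numpy; the identity in the final assertion (the error equals the sum of the discarded squared singular values) follows directly from the decomposition:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 10))
K = 3

U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Best rank-K approximation: keep only the first K singular triplets.
X_K = U[:, :K] @ np.diag(s[:K]) @ Vt[:K, :]

# The least-squares error is the sum of the discarded squared singular values.
err = np.sum((X - X_K) ** 2)
assert np.isclose(err, np.sum(s[K:] ** 2))
```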

A corollary of this result is that, given a matrix \(X\), the error function \(E = \sum_{ij}(X_{ij} - (XPP^{T})_{ij})^{2}\) is minimised over \(D \times K\) matrices \(P\) with orthonormal columns by taking \(P = V_{K}\), the matrix whose columns are the first \(K\) right singular vectors of \(X\).
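This can be illustrated numerically: projecting onto the leading right singular vectors gives a smaller error than projecting onto a random \(K\)-dimensional subspace (a minimal sketch, with illustrative names):

```python
import numpy as np

rng = np.random.default_rng(4)
N, D, K = 40, 6, 2
X = rng.normal(size=(N, D))
X = X - X.mean(axis=0)

def projection_error(X, P):
    """Squared error from projecting the rows of X onto the columns of P."""
    return np.sum((X - X @ P @ P.T) ** 2)

_, _, Vt = np.linalg.svd(X, full_matrices=False)
P_opt = Vt[:K].T                              # first K right singular vectors

Q, _ = np.linalg.qr(rng.normal(size=(D, K)))  # random orthonormal D x K matrix

assert projection_error(X, P_opt) <= projection_error(X, Q) + 1e-9
```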

Principal Component Analysis

We want to find a matrix that projects a data element \(\mathbf{x}\) on to a manifold vector \(\mathbf{y}\). Let's assume a linear approach, namely \(\mathbf{y} = P^{T}\mathbf{x}\) for a \(D \times K\) real matrix \(P\). This linear mapping preserves local and global structure of the data. Our goal is to pick a \(P\) that captures most of the variance of the data.

The variance in a given direction is captured by the eigendecomposition of the scatter matrix: the variance along an eigendirection is proportional to its eigenvalue. Accordingly, we can capture a good percentage of the variance by choosing the first \(K\) eigendirections, truncating specifically at a point where we feel we have captured sufficient variance that the remaining eigenvalues we exclude mostly capture the effects of noise. In this case, the eigenspectrum adopts an 'elbow' shape.
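One common way to choose the truncation point is to look at the cumulative fraction of variance explained; a sketch on synthetic data with two strong directions plus weak noise (the 95% threshold is an arbitrary, illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(5)
N, D = 200, 5
# Two strong signal directions plus weak isotropic noise, so the
# eigenspectrum has a clear elbow after the second eigenvalue.
X = rng.normal(size=(N, 2)) @ rng.normal(size=(2, D)) * 3.0
X = X + 0.1 * rng.normal(size=(N, D))
X = X - X.mean(axis=0)

s = np.linalg.svd(X, compute_uv=False)       # singular values, descending
explained = np.cumsum(s**2) / np.sum(s**2)   # cumulative variance fraction

K = int(np.searchsorted(explained, 0.95)) + 1  # keep e.g. 95% of the variance
print(explained.round(3), K)                   # K typically comes out as 2 here
```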

Our matrix \(P\) is then built as the \(D \times K\) matrix with the first \(K\) eigenvectors of \(S\) as its columns, sorted in order of descending eigenvalue. The manifold coordinates are given by \(Y = XP\), and the hyperplane projection by \(\tilde{X} = XPP^{T}\).
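Putting the pieces together, a minimal PCA sketch in numpy (not a reference implementation; the function and variable names are my own):

```python
import numpy as np

def pca(X, K):
    """Project the rows of X onto the top-K principal directions.

    Returns the manifold coordinates Y = X P and the hyperplane
    projection X P P^T.
    """
    X = X - X.mean(axis=0)                   # centre the data
    evals, evecs = np.linalg.eigh(X.T @ X)   # eigendecompose the scatter matrix
    order = np.argsort(evals)[::-1]          # sort eigenvalues in descending order
    P = evecs[:, order[:K]]                  # D x K matrix of leading eigenvectors
    Y = X @ P                                # manifold coordinates
    X_proj = Y @ P.T                         # projection onto the K-dim hyperplane
    return Y, X_proj

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 8))
Y, X_proj = pca(X, K=3)
print(Y.shape, X_proj.shape)  # (100, 3) (100, 8)
```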

Variance Partitioning

PCA can be interpreted as splitting the data into an 'in-manifold' signal and the 'out-of-manifold' noise. In this case, we assume the noise can be separated into a few dimensions, whereas real measurement noise is likely to be isotropic across all components. There are different extensions of PCA that try to tackle this in different ways.

The first is to include isotropic Gaussian noise, and to optimise the PCA by estimating its scale. This is called probabilistic PCA (pPCA).
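For pPCA the maximum-likelihood solution is available in closed form (Tipping & Bishop, 1999); a numpy sketch of that standard solution, with illustrative names rather than any library API:

```python
import numpy as np

def ppca_ml(X, K):
    """Closed-form maximum-likelihood pPCA estimates.

    Returns the D x K loading matrix W and the isotropic noise variance sigma2.
    """
    N, D = X.shape
    X = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(X.T @ X / N)   # sample covariance eigenpairs
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]

    sigma2 = evals[K:].mean()                        # noise scale: mean of discarded eigenvalues
    W = evecs[:, :K] * np.sqrt(evals[:K] - sigma2)   # leading directions, rescaled
    return W, sigma2
```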

Another alternative, called Factor Analysis, assumes independent Gaussian noise along each measured dimension. This is sensible, for example, when the measured quantities are in different units.
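A comparison sketch using scikit-learn (assuming its `FactorAnalysis` and `PCA` estimators, both of which expose a `noise_variance_` attribute after fitting; the synthetic data is purely illustrative):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis, PCA

rng = np.random.default_rng(7)
# Two latent factors observed through 6 measurements whose noise scales
# differ wildly, as if the columns were recorded in different units.
Z = rng.normal(size=(500, 2))
W = rng.normal(size=(2, 6))
noise_scales = np.array([0.1, 0.1, 1.0, 1.0, 5.0, 5.0])
X = Z @ W + rng.normal(size=(500, 6)) * noise_scales

fa = FactorAnalysis(n_components=2).fit(X)
pca = PCA(n_components=2).fit(X)

# Factor Analysis estimates an independent noise variance per dimension,
# while PCA's probabilistic interpretation uses a single isotropic scale.
print(fa.noise_variance_)   # one value per measured dimension
print(pca.noise_variance_)  # a single scalar (pPCA-style estimate)
```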