Linear Representations and Basis Vectors

This page describes basic linear algebra concepts related to linear representations in vector spaces.

Basis Vectors

Originally we are given the recorded data in the channel space, say with $n$ channels and $N$ samples (i.e. time points, frames). The data can be thought of as a collection of $N$ vectors in $n$-dimensional space, each of which in the case of EEG is a snapshot of the electric potential at the $n$ electrodes (relative to a given reference) at a particular time point.

The data can also be thought of as a collection of $n$ time series, or channel vectors, in $N$-dimensional space; or as a collection of spatiotemporal data segments (each e.g. an $n \times T$ matrix, for some segment length $T$) in $(n \cdot T)$-dimensional space. As we are concerned here with instantaneous ICA, we'll primarily think of the data as a set of vectors in $n$-dimensional channel space, disregarding the temporal order of the vectors.

ICA is a type of linear representation of data in terms of a set of "basis" vectors. Since we're working here in channel space, the vectors we're interested in will be in $\mathbb{R}^n$. To illustrate, in the following we'll use a three-dimensional example, say data recorded using three channels. The data then is given to us in three-dimensional vector space, $\mathbb{R}^3$.

[Image: R3 2.png]

Each of these data points is a vector in three dimensional space.

In general, any point in $n$-dimensional space can be represented as a linear combination of any $n$ vectors that are linearly independent. For example, take any three linearly independent vectors $\mathbf{v}_1$, $\mathbf{v}_2$, $\mathbf{v}_3$ in $\mathbb{R}^3$.

Linear independence means that no vector in the set can be formed as a linear combination of the others, i.e. each vector branches out into a new dimension, and they do not all lie in a zero-volume subspace of $\mathbb{R}^3$. Equivalently, there is no nonzero coefficient vector $(c_1, c_2, c_3)$ that can multiply the set to produce the zero vector:

$c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + c_3 \mathbf{v}_3 \neq \mathbf{0} \quad \text{whenever} \quad (c_1, c_2, c_3) \neq (0, 0, 0).$

Mathematically, this is true if and only if the matrix with the vectors as its columns is nonsingular:

$\det\big(\,[\,\mathbf{v}_1 \;\; \mathbf{v}_2 \;\; \mathbf{v}_3\,]\,\big) \neq 0.$
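
To make the independence condition concrete, here is a small numerical check in NumPy (the three vectors below are illustrative choices of our own, not taken from the figures): a set of vectors is linearly independent exactly when the matrix holding them as columns has full rank, equivalently a nonzero determinant.

```python
import numpy as np

# Three illustrative vectors in R^3 (chosen here just for demonstration).
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 2.0, 1.0])
v3 = np.array([1.0, 1.0, 0.0])

V = np.column_stack([v1, v2, v3])   # 3 x 3 matrix with the vectors as columns

# Linearly independent iff the matrix has full rank,
# equivalently iff its determinant is nonzero.
print(np.linalg.matrix_rank(V))     # 3  -> linearly independent
print(np.linalg.det(V))             # nonzero (here -3.0)
```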

So for example any data vector $\mathbf{x}$ can be represented in terms of three linearly independent basis vectors $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$, using a set of coefficients contained, say, in some (unique) coefficient vector $\mathbf{s} = (s_1, s_2, s_3)$:

$\mathbf{x} = s_1 \mathbf{v}_1 + s_2 \mathbf{v}_2 + s_3 \mathbf{v}_3.$

A linear representation of the data is a fixed basis set, $A = [\,\mathbf{a}_1 \;\cdots\; \mathbf{a}_n\,]$, that is used to represent each data point:

$\mathbf{x}_k = A\,\mathbf{s}_k, \qquad k = 1, \ldots, N.$

If we collect the vectors $\mathbf{x}_k$ into the columns of a matrix $X$, and the vectors $\mathbf{s}_k$ into the columns of a matrix $S$, then we can write,

$X = A\,S,$

where $X$ is the $n \times N$ data matrix, $A$ is the $n \times n$ matrix of basis vectors, and $S$ is the $n \times N$ coefficient (or loading, or weight) matrix, with the column $\mathbf{s}_k$ giving the "coordinates" of the point $\mathbf{x}_k$ in the coordinate space represented by the basis $A$.
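
The representation $X = AS$, and the recovery of the coordinates from the data, can be sketched in a few lines of NumPy (the matrices below are random stand-ins for real channel data):

```python
import numpy as np

rng = np.random.default_rng(0)

n, N = 3, 1000                      # channels, time points (illustrative sizes)
A = rng.standard_normal((n, n))     # basis matrix, columns are basis vectors
S = rng.standard_normal((n, N))     # coefficient matrix, one column per data point

X = A @ S                           # data matrix in channel space

# Each column x_k of X is the linear combination A @ s_k of the basis vectors.
# If A is invertible, the coordinates are recovered by S = A^{-1} X.
S_rec = np.linalg.solve(A, X)
print(np.allclose(S_rec, S))        # True
```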

We have assumed thus far that the data itself is "full rank", i.e. that there exists a set of $n$ data vectors that are linearly independent. It may happen, however, that the data do not lie in the "full volume" of $\mathbb{R}^n$, but rather occupy a subspace of smaller dimension.

In three dimensions, for example, all of the data might exist in a two-dimensional subspace.

[Images: R3 3.png, R3 4.png]

The data is still represented as points or vectors in three dimensional space, with three coordinates, but in fact only two coordinates are required (once a "center" point has been fixed in the subspace).

Even if the data does not lie exactly in a subspace, it may be the case that one of the dimensions (directions) is just numerical noise. Eliminating such extraneous dimensions can lead to more efficient and stable subsequent processing of the data.

To understand how the data occupies the space volumetrically, and, in the case of data that is not full rank, to determine which subspace the data lies in, we will use Principal Component Analysis, described in the next section.

Principal Component Analysis (PCA)

Let the data be represented by an $n \times N$ (channels by time points) matrix $X$, which we're thinking of as a set of $N$ vectors contained in the columns. Let us also assume that the data is "zero mean", i.e. that the mean of each channel (row of $X$) has been removed (subtracted from the row), so that:

$\sum_{k=1}^{N} x_{ik} = 0, \qquad i = 1, \ldots, n.$

Now, one way to determine the rank of the data is to examine the covariance matrix, or matrix of channel correlations, which is defined by,

$C = \frac{1}{N}\, X X^T.$

The matrix $C$ has the same rank, or intrinsic dimensionality, as the matrix $X$. If we perform an eigen-decomposition of $C$, we get,

$C = U \Lambda U^T,$

where $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ contains the eigenvalues and the columns of $U$ are the corresponding eigenvectors.

Since $C$ is symmetric and "positive semidefinite", all the eigenvalues are real and non-negative. $C$ (and thus $X$) is full rank if and only if all eigenvalues are strictly positive, i.e. $\lambda_i > 0$, $i = 1, \ldots, n$.
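
As a sketch of this rank computation (with synthetic, deliberately rank-deficient data standing in for EEG): center each channel, form the covariance matrix, eigen-decompose it, and count the eigenvalues that are not numerically zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 5000

# Synthetic rank-deficient example: 3-channel data that lies in a 2-D subspace.
B = rng.standard_normal((n, 2))           # two directions in R^3
X = B @ rng.standard_normal((2, N))       # data confined to span(B)

X = X - X.mean(axis=1, keepdims=True)     # remove the mean of each channel (row)

C = X @ X.T / N                           # n x n covariance matrix
lam, U = np.linalg.eigh(C)                # eigenvalues (ascending) and eigenvectors

print(lam)                                 # one eigenvalue is numerically ~0
r = int(np.sum(lam > 1e-10 * lam.max()))   # number of "nonzero" eigenvalues
print(r)                                   # 2 -> the data has rank 2
```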

If some of the eigenvalues are zero, then the data is not full rank, and the rank $r$ is equal to the number of nonzero eigenvalues. In this case, the data lies entirely in the $r$-dimensional subspace spanned by the eigenvectors corresponding to the nonzero eigenvalues. Collecting these $r$ eigenvectors into the $n \times r$ matrix $U_r$, we can write,

$X = U_r S_r,$

and,

$S_r = U_r^T X,$

where $X$ is the data matrix, $U_r$ is the matrix of basis vectors, and $S_r$ is the coefficient matrix, with the column $\mathbf{s}_k$ giving the "coordinates" of the point $\mathbf{x}_k$ in the $r$-dimensional space of the nonzero eigenvectors.

The data is reduced in dimension from $n$ to $r$ by "projecting" onto the $r$-dimensional space,

$X_r = U_r^T X.$

Analysis may be conducted on the reduced data $X_r$, e.g. ICA may be performed, giving results in $r$-dimensional space. The coordinates in the original $n$-dimensional data space are then given by simply multiplying the $r$-dimensional vectors by $U_r$. The $n \times n$, rank $r$, matrix $P_r$,

$P_r = U_r U_r^T,$

in this case is called a "projection matrix", projecting the data in the full space onto the subspace spanned by the first $r$ eigenvectors.
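
A minimal NumPy sketch of the dimension reduction and of the projection matrix $P_r = U_r U_r^T$, again using synthetic rank-$r$ data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, r = 3, 5000, 2

B = rng.standard_normal((n, r))
X = B @ rng.standard_normal((r, N))        # rank-r data in R^n
X = X - X.mean(axis=1, keepdims=True)

lam, U = np.linalg.eigh(X @ X.T / N)       # eigenvalues in ascending order
U_r = U[:, -r:]                            # eigenvectors of the r largest (nonzero) eigenvalues

X_r = U_r.T @ X                            # reduced r x N data, e.g. for ICA
X_back = U_r @ X_r                         # coordinates back in the original n-D space

P_r = U_r @ U_r.T                          # n x n projection matrix of rank r
print(np.allclose(P_r @ X, X))             # True: the data already lies in the subspace
print(np.allclose(X_back, X))              # True, for the same reason
```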

Singular Value Decomposition (SVD)

A related decomposition, called the Singular Value Decomposition (SVD), can be performed directly on the data matrix itself to produce a linear representation (of possibly reduced rank $r$). The SVD decomposes the data matrix into,

$X = U \Sigma V^T,$

where $X$ is the $n \times N$ data matrix, $U$ is the $n \times r$ matrix of orthonormal (orthogonal and unit norm) "left eigenvectors", $\Sigma$ is the $r \times r$ diagonal matrix of strictly positive "singular values", and $V$ is the $N \times r$ matrix of orthonormal "right eigenvectors".

From the SVD, we see that,

$X X^T = U \Sigma V^T V \Sigma U^T = U \Sigma^2 U^T,$

so that $U$ contains the eigenvectors of $C = \frac{1}{N} X X^T$ and $\Lambda = \frac{1}{N} \Sigma^2$. The SVD directly gives the linear representation:

$X = U S,$

where $S = \Sigma V^T$. The vectors in $U$ are orthonormal (orthogonal and unit norm), and the rows of $S$ are orthogonal (since $\Sigma$ is diagonal, and $V$ is orthonormal).

The SVD gives the unique linear representation (assuming the singular values are distinct) of the data matrix such that the columns of $U$ are orthonormal and the rows of $S$ are orthogonal (not necessarily unit norm: $S S^T = \Sigma^2$). (The SVD is actually only unique when the singular values are all distinct; a subspace determined by equal singular values does not have a unique orthonormal basis in this subspace, allowing for arbitrary cancelling rotations of the left and right eigenvectors in this subspace.)
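
The SVD representation, and its relation to the covariance eigen-decomposition, can be checked numerically on random data using NumPy's `np.linalg.svd`:

```python
import numpy as np

rng = np.random.default_rng(3)
n, N = 3, 1000
X = rng.standard_normal((n, n)) @ rng.standard_normal((n, N))
X = X - X.mean(axis=1, keepdims=True)

# Economy-size SVD: U is n x n orthonormal, sigma holds the singular values, Vt is n x N.
U, sigma, Vt = np.linalg.svd(X, full_matrices=False)

S = np.diag(sigma) @ Vt                            # coefficient matrix in X = U S
print(np.allclose(U @ S, X))                       # True

# Rows of S are orthogonal: S S^T is the diagonal matrix of squared singular values.
print(np.allclose(S @ S.T, np.diag(sigma**2)))     # True

# Relation to PCA: X X^T = U diag(sigma^2) U^T, so C = X X^T / N has
# eigenvalues sigma^2 / N and eigenvectors given by the columns of U (up to sign).
print(np.allclose(X @ X.T, U @ np.diag(sigma**2) @ U.T))   # True
```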

Having the rows of $S$ be orthogonal, i.e. uncorrelated, is a desirable feature of the representation, but having the basis vectors be orthonormal is overly restrictive in many cases of interest, like EEG. However, if we only require the rows of the coefficient matrix to be orthogonal, then we lose the uniqueness of the representation, since for any orthonormal matrix $Q$, and any full rank diagonal matrix $D$, we have,

$X = U \Sigma V^T = \big( U \Sigma Q D^{-1} \big) \big( D\, Q^T V^T \big) = A' S',$

where the rows of the new coefficient matrix $S' = D\, Q^T V^T$ are still orthogonal, but the new basis vectors in the columns of $A' = U \Sigma Q D^{-1}$ are in general no longer orthogonal.

A linear representation of the data,

$X = A S,$

implies that the coefficients can be recovered from the data using the inverse of $A$ (or, in the case of a reduced-rank representation with an $n \times r$ matrix $A$, any left inverse, like the pseudoinverse $A^{+}$):

$S = A^{-1} X \qquad \text{(or } S = A^{+} X \text{)}.$
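
A short sketch of coefficient recovery, here with a tall $n \times r$ basis matrix so that the Moore-Penrose pseudoinverse serves as the left inverse:

```python
import numpy as np

rng = np.random.default_rng(4)
n, r, N = 3, 2, 500

A = rng.standard_normal((n, r))      # tall basis matrix of rank r < n
S = rng.standard_normal((r, N))
X = A @ S

# A is not square, so recover the coefficients with a left inverse,
# here the Moore-Penrose pseudoinverse (A has full column rank).
S_rec = np.linalg.pinv(A) @ X
print(np.allclose(S_rec, S))         # True
```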

PCA and Sphering

We have seen that the SVD gives one linear representation of the data matrix, $X = AS$. Rescaled so that the coefficients have unit average power, the SVD (PCA "whitening") representation puts,

$W = \Lambda^{-1/2} U^T, \qquad S = W X = \Lambda^{-1/2} U^T X = \sqrt{N}\, V^T,$

so that $\frac{1}{N} S S^T = I$, where $I$ is the identity matrix.

Another representation, which we call "sphering", puts,

$W = C^{-1/2} = U \Lambda^{-1/2} U^T, \qquad S = C^{-1/2} X.$

This latter representation has certain advantages. We can show, e.g., that the sphering transformation changes the data as little as possible among all "whitening" transformations, i.e. those that leave the resulting rows of the coefficient matrix uncorrelated with unit average power.

This is equivalent to taking $Q = U$ in the general form below. Let the general form of a "whitening" decorrelating transformation, then, be:

$W_Q = Q\, \Lambda^{-1/2} U^T,$

for an arbitrary orthonormal matrix $Q$. We measure the distance of the transformed data from the original data by the sum of the squared errors:

$E(Q) = \big\| X - W_Q X \big\|_F^2 = \mathrm{trace}\Big( (X - W_Q X)(X - W_Q X)^T \Big).$

Writing $W_Q$ in the general form of the decorrelating transformation, and using $X X^T = N\, U \Lambda U^T$, we get,

$E(Q) = N \Big( \mathrm{trace}(\Lambda) - 2\, \mathrm{trace}\big( Q\, \Lambda^{1/2} U^T \big) + n \Big) \;\geq\; N \Big( \mathrm{trace}(\Lambda) - 2\, \mathrm{trace}\big( \Lambda^{1/2} \big) + n \Big),$

where the inequality holds because $\mathrm{trace}\big( Q\, \Lambda^{1/2} U^T \big)$ is at most the sum of the singular values of $\Lambda^{1/2} U^T$, which is $\mathrm{trace}\big( \Lambda^{1/2} \big)$. Equality is achieved in the last inequality if and only if $Q = U$. The resulting minimal squared error is the same squared error that would result from simply normalizing the variance of each channel of the rotated data $U^T X$, which is equivalent to the transformation $\Lambda^{-1/2}$.


We shall refer to this particular whitening transformation, $U \Lambda^{-1/2} U^T$, as the "sphering" transformation. This matrix is also referred to as the inverse of the "square root" of the covariance matrix, written $C^{-1/2}$. It is the unique symmetric positive definite matrix $W$ satisfying $W C\, W = I$, i.e. $W^2 = C^{-1}$.
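
The whitening matrix $\Lambda^{-1/2} U^T$ and the sphering matrix $U \Lambda^{-1/2} U^T$ can be compared directly on synthetic data: both decorrelate the channels with unit average power, and the sphering transform stays closer to the original data, as the argument above predicts.

```python
import numpy as np

rng = np.random.default_rng(5)
n, N = 8, 20000
X = rng.standard_normal((n, n)) @ rng.standard_normal((n, N))
X = X - X.mean(axis=1, keepdims=True)

C = X @ X.T / N
lam, U = np.linalg.eigh(C)

W_white = np.diag(lam ** -0.5) @ U.T        # whitening matrix  Lambda^{-1/2} U^T
W_sph = U @ np.diag(lam ** -0.5) @ U.T      # sphering matrix   U Lambda^{-1/2} U^T = C^{-1/2}

# Both decorrelate the channels with unit average power ...
print(np.allclose(W_white @ C @ W_white.T, np.eye(n)))   # True
print(np.allclose(W_sph @ C @ W_sph.T, np.eye(n)))       # True

# ... but the sphering transform changes the data least (sum of squared errors).
err_white = np.sum((X - W_white @ X) ** 2)
err_sph = np.sum((X - W_sph @ X) ** 2)
print(err_sph <= err_white)                               # True
```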


Remarks:

We can view this result as saying that the sphering matrix $U \Lambda^{-1/2} U^T$ changes the data as little as possible, where the data is viewed either as a collection of channel vectors, or as a collection of channel time series. The result is not surprising, since using $Q = U$ undoes the initial rotation performed by $U^T$.

We have found in practice, performing ICA on EEG data, that using the (symmetric) sphering matrix $U \Lambda^{-1/2} U^T$ as an initialization of the unmixing matrix for ICA generally yields the best results and the quickest convergence, especially in comparison with the whitening matrix $\Lambda^{-1/2} U^T$, which might suggest that the former whitening transformation produces more independent components than the latter. This is confirmed empirically in our mutual information computations.

Why should the sphering matrix $U \Lambda^{-1/2} U^T$ produce more independent time series, and a better starting point for ICA, than the whitening matrix $\Lambda^{-1/2} U^T$? In the case of EEG, this is likely due to the fact that the EEG sensor electrodes are spread out at distances of the same order as the distances between the EEG sources. Thus each source tends to have a much larger effect on a relatively small number of sensors, rather than a moderate effect on all of the sensors.

The whitening matrix $\Lambda^{-1/2} U^T$, in projecting the data onto the eigenvectors of the covariance matrix, produces time series that are each mixtures of all of the channels, and in this sense more mixed than the original data, in which the sources distribute over a relatively small number of channels.

The sphering matrix, on the other hand, rotates the transformed data back into its original coordinates, and produces time series that are closest to the original data, which was relatively independent at the start.

By leaving the data in the eigenvector coordinate system, the whitening matrix forces the ICA algorithm to “undo” a great deal of mixing in the time series, and as a starting point for iterative algorithms, makes it more difficult (in terms of potential local optima) and more time consuming (since the starting point is farther from the ICA optimum).

EEG Data Reference and Re-referencing

EEG data is recorded as a potential difference between the electrode location and the reference. Biosemi active recordings use a reference that is separate from the scalp electrodes. If data is recorded with a specific electrode reference, then the data essentially includes a "zero" channel corresponding to the signal at the reference location relative to itself.

A commonly used reference is the "average reference", which consists essentially of subtracting the mean scalp potential at each time point from each recorded channel potential. Let the $n$-dimensional vector of all ones be denoted $\mathbf{e}$. If the data is denoted $X$, then the average referenced data is equivalent to,

$\Big( I - \tfrac{1}{n}\, \mathbf{e}\, \mathbf{e}^T \Big) X.$

The average reference reduces the rank of the data because the referencing matrix $I - \tfrac{1}{n}\mathbf{e}\mathbf{e}^T$ is rank $n-1$ (note that if you include the original reference channel when computing the average reference, the average reference does not reduce the rank of the data). In particular, the vector $\mathbf{e}$ is in the "null space" of the referencing matrix:

Multiplying the referencing matrix by $\mathbf{e}$ gives

$\Big( I - \tfrac{1}{n}\, \mathbf{e}\, \mathbf{e}^T \Big) \mathbf{e} \;=\; \mathbf{e} - \tfrac{1}{n}\, \mathbf{e}\, \big( \mathbf{e}^T \mathbf{e} \big) \;=\; \mathbf{e} - \mathbf{e} \;=\; \mathbf{0}.$

Here the factor $1/n$ is key, since $\mathbf{e}^T \mathbf{e} = n$, so that $(\mathbf{e}^T \mathbf{e})/n = 1$. Therefore $\mathbf{e}$ spans the null space of the average referencing matrix, and the referenced data has rank at most $n - 1$.
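
A quick numerical illustration of the rank reduction caused by average referencing, using synthetic full-rank channel data:

```python
import numpy as np

rng = np.random.default_rng(6)
n, N = 32, 10000
X = rng.standard_normal((n, n)) @ rng.standard_normal((n, N))   # full-rank channel data

e = np.ones((n, 1))
R_avg = np.eye(n) - (e @ e.T) / n        # average-reference matrix  I - (1/n) e e^T

Y = R_avg @ X                            # average-referenced data

print(np.linalg.matrix_rank(X))          # n
print(np.linalg.matrix_rank(Y))          # n - 1: the rank drops by one
print(np.allclose(R_avg @ e, 0))         # True: e lies in the null space
```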


Re-referencing to a specific channel or channels can be represented similarly. Let the vector with a one in the $j$th position (and zeros elsewhere) be denoted $\mathbf{e}_j$.

Suppose e.g. that the mastoid electrode numbers are $j_1$ and $j_2$. Then the linked mastoid re-reference is equivalent to:

$\Big( I - \tfrac{1}{2}\, \mathbf{e}\, \big( \mathbf{e}_{j_1} + \mathbf{e}_{j_2} \big)^T \Big) X.$

Again, however, $\mathbf{e}$ is in the null space of this referencing matrix, showing that its rank is $n - 1$. Any referencing matrix will be rank deficient by one, and will thus leave the data rank deficient by one dimension.
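
The linked-mastoid referencing matrix can be constructed the same way; the channel indices below are hypothetical:

```python
import numpy as np

n = 32
j1, j2 = 9, 19                      # hypothetical (0-based) mastoid channel indices

e = np.ones((n, 1))
ej1 = np.zeros((n, 1)); ej1[j1] = 1.0
ej2 = np.zeros((n, 1)); ej2[j2] = 1.0

# Linked-mastoid referencing: subtract the mean of channels j1 and j2 from every channel.
R_lm = np.eye(n) - 0.5 * e @ (ej1 + ej2).T

print(np.allclose(R_lm @ e, 0))          # True: e is again in the null space
print(np.linalg.matrix_rank(R_lm))       # n - 1
```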

In addition to referencing, EEG pre-processing usually includes high-pass filtering (to reduce non-stationarity caused by slow drifts). Linear filtering (high-pass, low-pass, band-pass, FIR, IIR, etc.) can be represented as a matrix multiplication of the data on the right by a large ($N \times N$) matrix $F$ whose columns are time-shifted versions of each other. The combined referencing and filtering operations can then be represented as:

$\Big( I - \tfrac{1}{n}\, \mathbf{e}\, \mathbf{e}^T \Big)\, X\, F.$

The resulting referenced and filtered matrix should remain rank deficient by one. However, when referencing is done first, reducing the rank by one, and filtering is performed afterwards, it may happen that the numerical rank of the data increases so that it becomes essentially full rank again. This is apparently due to numerical effects of multiplying (in effect) by the large $N \times N$ matrix $F$.
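
Filtering as right-multiplication by a Toeplitz matrix can be sketched as follows (the 2-tap kernel is chosen only for illustration); inspecting the singular values of the referenced and filtered data shows whether it remains numerically rank deficient:

```python
import numpy as np

rng = np.random.default_rng(7)
n, N = 16, 2000
X = rng.standard_normal((n, n)) @ rng.standard_normal((n, N))

e = np.ones((n, 1))
Y = (np.eye(n) - e @ e.T / n) @ X              # average reference: rank n - 1

# A simple causal FIR filter applied by right-multiplication with an N x N
# Toeplitz matrix whose columns are time-shifted copies of the kernel.
h = np.array([1.0, -0.95])                     # illustrative high-pass-like 2-tap kernel
F = sum(hk * np.eye(N, k=k) for k, hk in enumerate(h))

Z = Y @ F                                      # referenced, then filtered, data

# Inspect the smallest relative singular value to judge whether the data is
# still numerically rank deficient after filtering.
for M in (Y, Z):
    s = np.linalg.svd(M, compute_uv=False)
    print(s.min() / s.max())
```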

To summarize, re-referencing should reduce the rank of the data, relegating it to an $(n-1)$-dimensional subspace of the $n$-dimensional channel space. However, subsequent filtering of the rank-reduced referenced data may increase the rank of the data again (so that the minimum singular value is significantly larger than zero). In this case, numerical noise in the removed direction (here $\mathbf{e}$) is essentially added back into the data as an independent component.