[Eeglablist] ICA decomposition with 128 versus 64 channels?

Jürgen Kayser kayserj at pi.cpmc.columbia.edu
Fri Jan 20 08:56:08 PST 2006


Matthew:

You might be interested in the following article, which will be published in
the February 2006 issue of Clinical Neurophysiology (currently available via
its DOI at Elsevier's web site).

Kayser, J., & Tenke, C.E. (2006). Principal components analysis of Laplacian
waveforms as a generic method for identifying ERP generator patterns: II.
Adequacy of low-density estimates. Clinical Neurophysiology, 117(2), in
press.

http://dx.doi.org/10.1016/j.clinph.2005.08.033

Our report investigates the benefits of high- vs. low-density EEG montages
(129 vs. 31 channels) for typical ERP group data, which is likely to be
relevant to your question. The individual topographic specificity of ERP
components derived from high-resolution ERP/CSD data is largely lost in ERP
group data, because averaging across subjects acts as a spatial low-pass
filter. If the focus is on brain processes that generalize to the population
under study, there seems to be no immediate gain from high-density
recordings. These findings may come as a surprise, as they seem to contradict
common ERP wisdom based on earlier recommendations derived from simulated and
individual-subject data. Consequently, a (clinical) ERP researcher would be
well advised to weigh the costs and benefits of high-density EEG recordings.
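To make the spatial smoothing argument concrete, here is a minimal numerical
sketch in Python (my own illustration of the general point, not an analysis
from the paper; the bump widths and jitter values are arbitrary assumptions):
model each subject's component topography as a focal bump whose scalp
location varies across subjects, and the grand average comes out broader,
i.e., spatially low-pass filtered.

    # Illustration: averaging focal topographies across subjects broadens them.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 129)   # idealized 1-D scalp axis, 129 "sites"
    sigma_subj = 0.10                 # focal single-subject topography (assumed)
    jitter_sd = 0.15                  # between-subject shift of the peak (assumed)

    def fwhm(y, x):
        """Full width at half maximum of a unimodal curve."""
        above = x[y >= y.max() / 2]
        return above[-1] - above[0]

    # 30 simulated subjects, each a Gaussian bump at a jittered location
    maps = np.array([np.exp(-0.5 * ((x - rng.normal(0, jitter_sd)) / sigma_subj) ** 2)
                     for _ in range(30)])
    grand_avg = maps.mean(axis=0)

    print(f"single-subject FWHM: {fwhm(maps[0], x):.2f}")   # ~0.24
    print(f"grand-average FWHM:  {fwhm(grand_avg, x):.2f}")  # ~0.42, broader

The grand-average bump is markedly wider than any individual one, so fine
spatial detail that extra electrodes could resolve in a single subject is
smeared out at the group level.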

Best, Jürgen and Craig



On 18 Jan 2006 at 23:41, Matthew Belmonte wrote:

> I'm in the process of putting together a proposal for an EEG facility, and
> would like to select hardware with EEGLAB processing in mind.  I'm approaching
> this from perhaps a bit of a dated perspective: in 1996 I was using only 16
> channels and homebrewed software for time-frequency analysis, and I've spent
> the intervening decade working exclusively with fMRI.
> 
> I've heard from one EEGLAB user that 128 channels don't confer much advantage
> over 64, since inputs must be spatially downsampled to be processed
> practically on typical computing hardware, and since the independent
> components of interest (those from neural sources) don't become much cleaner
> with 128 inputs than with 64.  (The tradeoff of spatial resolution and SNR
> against electrode application time is also a consideration; we'd be recording
> from autistic children and couldn't afford a great deal of time spent
> fiddling.)
> 
> I'd like to hear from EEGLAB users (and developers!) with experience at 128
> and/or 64 channels:  Do you find a 64-channel system adequate?  What
> improvement in data quality has moving to 128 channels given you?  If I loaded
> up a GNU/Linux system with the most RAM that I could get (16GB on an IBM
> IntelliStation), would it be able to handle an ICA decomposition of 128-channel
> data without thrashing, or would I be doubling my investment in amplifiers only
> to have to mix down 128 signals to 64 before ICA?  And, even if it would be
> computationally practical, would it be scientifically useful enough to justify
> the extra preparation time?
> 
> Many thanks
> 
> Matthew Belmonte <mkb30 at cam.ac.uk>
> _______________________________________________
> eeglablist mailing list: eeglablist at sccn.ucsd.edu
> Eeglablist page: http://sccn.ucsd.edu/eeglab/eeglabmail.html
> To unsubscribe, send an empty email to eeglablist-unsubscribe at sccn.ucsd.edu
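
On the memory question in Matthew's message, a rough back-of-envelope
estimate suggests 16 GB leaves ample headroom for the data matrix itself.
This sketch assumes hypothetical recording parameters (500 Hz, one hour) and
that the decomposition operates on a double-precision channels-by-samples
matrix, as is standard for EEGLAB's runica:

    # Back-of-envelope RAM estimate for ICA on continuous 128-channel EEG.
    # Parameters below are assumptions for illustration, not a benchmark.
    n_channels = 128
    srate_hz = 500              # assumed sampling rate
    minutes = 60                # assumed recording length
    bytes_per_sample = 8        # double precision

    n_samples = srate_hz * 60 * minutes
    data_gb = n_channels * n_samples * bytes_per_sample / 2**30
    print(f"data matrix: {data_gb:.1f} GB")              # ~1.7 GB
    # The unmixing matrix (channels x channels) is negligible by comparison;
    # preprocessing and ICA typically hold a few working copies of the data,
    # so budgeting ~3x the raw size is prudent -- still well under 16 GB.
    print(f"with ~3x working copies: {3 * data_gb:.1f} GB")

So thrashing is unlikely to be the limiting factor; whether the extra
channels yield scientifically cleaner components is, as Matthew notes, a
separate question.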





