[Eeglablist] ICA decomposition with 128 versus 64 channels?

Matthew Belmonte mkb30 at cam.ac.uk
Wed Jan 18 15:41:05 PST 2006


I'm in the process of putting together a proposal for an EEG facility, and
would like to select hardware with EEGLAB processing in mind.  I'm approaching
this from perhaps a bit of a dated perspective: in 1996 I was using only 16
channels and homebrewed software for time-frequency analysis, and I've spent
the intervening decade working exclusively with fMRI.

I've heard from one EEGLAB user that 128 channels don't confer much advantage
over 64, since inputs must be spatially downsampled in order to be processed
practically on typical computing hardware, and since the independent components
of interest (those from neural sources) don't become much cleaner with 128
inputs as compared to 64.  (The tradeoff of spatial resolution and SNR against
electrode application time is also a consideration; we'd be recording from
autistic children and couldn't afford much time spent fiddling with electrodes.)
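The spatial downsampling mentioned above is usually done by projecting the data onto its top principal components before running ICA, rather than by discarding electrodes. A minimal sketch of that reduction step (in Python with NumPy, using random data as a stand-in for a real recording; the function name and shapes are illustrative, not any particular toolbox's API):

```python
import numpy as np

def pca_reduce(data, n_components):
    """Project channels-x-samples EEG data onto its top n_components
    spatial principal components (the same idea as reducing 128 inputs
    to 64 dimensions before ICA)."""
    data = data - data.mean(axis=1, keepdims=True)      # remove per-channel mean
    # Eigenvectors of the channel covariance give the spatial components
    u, s, _ = np.linalg.svd(data @ data.T / data.shape[1])
    return u[:, :n_components].T @ data                 # 64 x samples

# Hypothetical example: 128 channels, 10 s at 500 Hz
rng = np.random.default_rng(0)
eeg = rng.standard_normal((128, 5000))
reduced = pca_reduce(eeg, 64)
print(reduced.shape)  # (64, 5000)
```

The appeal of this route is that the reduced signals retain most of the recording's variance, so little is lost relative to recording 64 channels outright; the cost is that ICA then unmixes at most 64 components.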

I'd like to hear from EEGLAB users (and developers!) with experience at 128
and/or 64 channels:  Do you find a 64-channel system adequate?  What
improvement in data quality has moving to 128 channels given you?  If I loaded
up a GNU/Linux system with the most RAM that I could get (16GB on an IBM
IntelliStation), would it be able to handle an ICA decomposition of 128-channel
data without thrashing, or would I be doubling my investment in amplifiers only
to have to mix down 128 signals to 64 before ICA?  And, even if it would be
computationally practical, would it be scientifically useful enough to justify
the extra preparation time?
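For the memory question, a back-of-envelope estimate may help frame replies. The sketch below (Python; the recording length, sampling rate, and number of in-memory working copies are all assumptions, not measurements of any ICA implementation) just multiplies out the size of a double-precision data matrix:

```python
def ica_memory_gb(n_channels, srate_hz, minutes, copies=3):
    """Rough RAM footprint for ICA: the double-precision data matrix
    (channels x samples x 8 bytes) times an assumed number of working
    copies held in memory.  Illustrative arithmetic, not a benchmark."""
    samples = srate_hz * minutes * 60
    return n_channels * samples * 8 * copies / 1e9

# e.g. one hour of 128-channel data at an assumed 500 Hz:
print(ica_memory_gb(128, 500, 60))
```

On those assumptions an hour of 128-channel data needs on the order of 5-6 GB, and 64 channels roughly half that, so a 16 GB machine would not obviously thrash; the real constraint is whatever temporaries a given ICA implementation allocates.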

Many thanks

Matthew Belmonte <mkb30 at cam.ac.uk>
