[Eeglablist] On ICA based artifact rejection

Steve Luck sjluck at ucdavis.edu
Thu Sep 13 11:23:10 PDT 2012


Eren and Joe:  I'm not an expert on using ICA -- this is just the advice I received many years ago from Arno and Scott.  The idea is that baseline correction might change the apparent scalp distribution of a given component from trial to trial, which would prevent ICA from working properly: ICA assumes each source projects to the channels with a fixed scalp pattern, and a correction that shifts each channel by a different amount on every trial works against that assumption.  However, I don't know whether baseline correction ends up being worthwhile on balance despite this potential problem.  It may, as Joe suggests, depend on factors such as whether the data have been high-pass filtered and on the nature of the slow drifts.

In general, we need more sophisticated simulation studies to answer questions like this.  (This is a hint for you technically inclined graduate students and postdocs!)
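
As a very rough starting point, a toy version of such a simulation might mix a known blink-like source and a slow drift into a few channels, then check how well ICA recovers the blink from epoched data with and without per-epoch baseline correction.  Everything below (the sources, the mixing, the parameters) is an illustrative assumption, not a validated design:

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_epochs, n_times, sfreq = 100, 500, 250   # 2-s epochs at 250 Hz (assumed)
t = np.arange(n_times) / sfreq

# Source 1: a blink-like transient that appears on roughly 30% of epochs
blink = np.exp(-((t - 1.0) ** 2) / (2 * 0.05 ** 2))
s1 = np.array([blink * rng.choice([0, 1], p=[0.7, 0.3]) for _ in range(n_epochs)])

# Source 2: a slow drift whose slope varies from epoch to epoch
s2 = np.array([np.linspace(0.0, rng.normal(0, 2), n_times) for _ in range(n_epochs)])

# Fixed mixing of the two sources into four "channels", plus sensor noise
A = rng.normal(size=(4, 2))
sources = np.stack([s1, s2], axis=1)             # (epochs, sources, times)
data = np.einsum('cs,est->ect', A, sources)      # (epochs, channels, times)
data += 0.05 * rng.normal(size=data.shape)

def blink_recovery(epochs):
    """Fit ICA on concatenated epochs; return the best |correlation| between
    any recovered component and the true blink time course."""
    X = np.concatenate(list(epochs), axis=-1).T           # (samples, channels)
    comps = FastICA(n_components=4, random_state=0).fit_transform(X).T
    truth = np.concatenate(list(s1))
    return max(abs(np.corrcoef(c, truth)[0, 1]) for c in comps)

# Per-epoch "baseline" = mean of the first 200 ms of each epoch (placeholder choice)
baseline = data[..., : int(0.2 * sfreq)].mean(axis=-1, keepdims=True)

print('blink recovery, no baseline correction  :', round(blink_recovery(data), 3))
print('blink recovery, with baseline correction:', round(blink_recovery(data - baseline), 3))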

Steve


> From: Joseph Dien <jdien07 at mac.com>
> Subject: Re: [Eeglablist] On ICA based artifact rejection
> Date: September 12, 2012 6:56:29 PM PDT
> To: "Gunseli, E." <e.gunseli at vu.nl>
> Cc: "eeglablist at sccn.ucsd.edu" <eeglablist at sccn.ucsd.edu>
> 
> 
> All due respect to Steve (who runs a great course), I'd need some persuading to follow this advice.  The entire reason we baseline correct is that, without it, we can't actually be sure where the true zero is, because of slow drifts in the data.  Much of the relative offset that baseline correction removes is exactly what we want to remove, because it reflects these slow but, over time, substantial drifts.  In my own data, I've found that I get better results (in terms of clean removal of blink artifacts) with ICA artifact correction of epoched data if the epochs are baseline corrected first, because it helps minimize the effects of these drifts.  I haven't subjected this question to systematic evaluation, however.  I imagine it would also depend on the characteristics of the recording equipment, the filter settings, and so forth.
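
A rough sketch of the two orderings being contrasted here, written against MNE-Python purely as an analogue of the EEGLAB steps under discussion; the file name, event handling, component count, and filter cutoff are all placeholder assumptions:

import mne

raw = mne.io.read_raw_fif('subject01_raw.fif', preload=True)   # hypothetical file
events = mne.find_events(raw)                                   # assumes a trigger channel

# Ordering 1: epoch, baseline-correct, then fit ICA on the corrected epochs
epochs_bc = mne.Epochs(raw, events, tmin=-0.2, tmax=0.8,
                       baseline=(None, 0), preload=True)
ica1 = mne.preprocessing.ICA(n_components=20, random_state=97)
ica1.fit(epochs_bc)

# Ordering 2: high-pass filter the continuous data to tame slow drifts,
# fit ICA on that, and apply baseline correction only after the ICA cleanup
raw_hp = raw.copy().filter(l_freq=1.0, h_freq=None)
ica2 = mne.preprocessing.ICA(n_components=20, random_state=97)
ica2.fit(raw_hp)
ica2.exclude = [0]   # placeholder: artifact component index chosen by inspection

epochs_raw = mne.Epochs(raw_hp, events, tmin=-0.2, tmax=0.8,
                        baseline=None, preload=True)
epochs_clean = ica2.apply(epochs_raw.copy())
epochs_clean.apply_baseline((None, 0))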
> 
> Joe
> 
> On Sep 12, 2012, at 7:02 AM, "Gunseli, E." <e.gunseli at vu.nl> wrote:
> 
>> Dear all,
>>  
>> I have a question about the steps that should be taken before running ICA.
>>  
>> If we are going to epoch the data (step 5), why do we manually reject the extra-noisy stretches earlier (step 2)?
>> Since these noisy stretches occur mostly at the beginning and end of trial blocks, they will be removed during epoching anyway.
>> So epoching around the critical time window could save a fair amount of time that we would otherwise spend on manual inspection.
>>  
>> At this point I have another question: I have read that it is better to run ICA on continuous, non-epoched data.
>> One of the problems with running ICA on epoched data is that “the baseline correction changes relative values across channels” (S. Luck, ERP Boot Camp Lecture Slides). But that is probably not the only reason to run ICA on continuous data, because this problem could easily be avoided by applying baseline correction only after running ICA.
>> So I guess there must be other problems with running ICA on epoched data.
>> Can anyone provide information about these potential problems?
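
On the first question (why prune the grossly noisy stretches before epoching): if ICA is fit on the continuous record, the decomposition is shaped by everything it sees, including the breaks between blocks, unless those spans are explicitly excluded.  A minimal sketch of that idea, again using MNE-Python only as an analogue of the EEGLAB workflow, with a made-up file name, onsets, and durations:

import mne

raw = mne.io.read_raw_fif('subject01_raw.fif', preload=True)   # hypothetical file
raw.filter(l_freq=1.0, h_freq=None)                             # one common pre-ICA choice

# Mark the grossly noisy stretches (e.g., breaks between blocks) as 'bad' spans
bad_spans = mne.Annotations(onset=[0.0, 600.0], duration=[5.0, 10.0],
                            description=['bad_noisy', 'bad_noisy'])
raw.set_annotations(bad_spans)

# Fit ICA on the continuous data while skipping the annotated spans;
# the resulting decomposition can later be applied to the epoched data
ica = mne.preprocessing.ICA(n_components=20, random_state=97)
ica.fit(raw, reject_by_annotation=True)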
>>  
>> Kind regards,
>> Eren
>>  

--------------------------------------------------------------------
Steven J. Luck, Ph.D.
Director, Center for Mind & Brain
Professor, Department of Psychology
University of California, Davis
Room 109
267 Cousteau Place
Davis, CA 95618
(530) 297-4424
E-Mail: sjluck at ucdavis.edu
Web: http://mindbrain.ucdavis.edu/people/sjluck
Calendar: http://www.google.com/calendar/embed?src=stevenjluck%40gmail.com&ctz=America/Los_Angeles
--------------------------------------------------------------------
