[Eeglablist] Concerning the definition of ERSP

Arnaud Delorme arno at ucsd.edu
Thu Jul 30 19:51:02 PDT 2009


The "brainwave gain model" divides the power by the baseline (or,  
equivalently, subtracts log power). This means that in this scheme,  
stimuli modulate ongoing power. This is the model we have been using  
in Scott's group over the past decade, and the one originally  
implemented in EEGLAB.
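
The equivalence between dividing power and subtracting log power can be
sketched in a few lines of NumPy (the numbers are arbitrary illustrations,
not EEGLAB output):

```python
import numpy as np

# Hypothetical power values (arbitrary units): post-stimulus and baseline.
post_power = np.array([4.0, 9.0, 2.5])
baseline_power = np.array([2.0, 3.0, 5.0])

# Gain model expressed as a ratio of raw powers, in dB...
ratio_db = 10 * np.log10(post_power / baseline_power)

# ...or equivalently as a subtraction of log powers.
diff_db = 10 * np.log10(post_power) - 10 * np.log10(baseline_power)

print(np.allclose(ratio_db, diff_db))  # the two forms agree
```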

Standard baseline normalization is performed on raw power itself and  
does not involve any log (in contrast to Makeig 1993, I believe). It is  
more of a "brainwave additive model": in this scheme, stimuli "add"  
power rather than modulate existing power. This is the model usually  
used by Olivier Bertrand and Catherine Tallon-Baudry.

We believe more in the gain model, simply because it may be closer to  
the biology. It is easier to imagine that neural assembly activity is  
modulated rather than created from scratch. The normalization scheme,  
although it has some mathematical advantages, is also harder to justify  
biologically (what does it mean at the neural level to divide by the  
standard deviation after removing the mean?). It would be worth testing  
rigorously at the neural level (on the LFP) which model makes more  
sense (with stimuli of different strengths or amplitudes?). As always in  
biology, both models are probably correct in different circumstances.

In the latest newtimef function in EEGLAB (and only for about a month  
now), you can use either the gain model (divisive baseline) or the  
normalization baseline, so you may look at both. You may also use the  
log transform or not. In most cases, the time-frequency decompositions  
will be relatively equivalent.
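
A rough NumPy sketch of the two baseline options discussed above (the
variable names and the z-score form of the normalization baseline are
illustrative assumptions here, not newtimef's actual interface):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-condition time-frequency power matrix: (freqs, times).
power = rng.uniform(1.0, 4.0, size=(5, 200))
baseline = power[:, :50]  # pre-stimulus columns taken as the baseline

# Gain model (divisive baseline, reported in dB):
ersp_db = 10 * np.log10(power / baseline.mean(axis=1, keepdims=True))

# Additive model (subtractive baseline on raw power, no log; here with a
# z-score-style scaling by the baseline standard deviation):
mu = baseline.mean(axis=1, keepdims=True)
sd = baseline.std(axis=1, keepdims=True)
ersp_z = (power - mu) / sd

print(ersp_db.shape, ersp_z.shape)
```

In the z-scored version the baseline window averages to zero at every
frequency by construction, whereas the dB version is only approximately
zero there (the log of a mean is not the mean of the logs).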

Arno

On Jul 30, 2009, at 3:28 PM, Thomas Ferree wrote:

>
> Aleksander,
> I can't speak for the history, but conceptually it makes more sense to me
> to average across epochs before taking the difference.  First, the power
> spectrum computed from a single trial has high variance.  Dividing by a
> single-trial spectrum could potentially involve dividing by some very
> small numbers at certain time-frequency points, which would inflate
> the importance of that trial in the average.  Second, the statistical
> theory of power spectral estimation states that the trial-averaged
> power is chi^2 distributed, and its log is approximately normal.  So
> taking the difference of those log-averages is taking the difference of
> two approximately normal quantities, which makes sense.  There may be
> other considerations that I am overlooking.  I will be interested to
> hear what others comment.
>
> -- 
> Thomas Ferree, PhD
> Department of Radiology
> UT Southwestern Medical Center
> Email: tom.ferree at gmail.com
> Voice: (214) 648-9767
>
> On Thu, Jul 30, 2009 at 6:09 AM, Aleksander Alafuzoff <aleksander.alafuzoff at helsinki.fi> wrote:
> Hi,
>
> While (re-)reading Makeig's 1993 article, which introduced the
> ERSP-measure, and Delorme & Makeig's 2004 EEGLAB article, I noticed a
> slight discrepancy in the way ERSP is defined. In case I've failed to
> take something into account or misunderstood the articles, please
> correct me.
>
> In Makeig's original article the "mean subject ERSP" is the average of
> baseline-normalised epoch/trial ERSPs, where each epoch's time-
> localised spectra are normalised by that epoch's baseline. In the more
> recent EEGLAB article ERSP is defined as the mean square amplitude of
> spectral estimate F_k (averaged over epochs k) divided by the mean
> baseline power spectrum*.
>
> Conceptually this is a big difference: the 1993 ERSP is in principle
> defined for individual epochs, while ERSP according to the 2004
> article is only defined relative to a set of epochs. In general the
> two definitions would also seem to give slightly different estimates
> of the "spectral perturbation", although, if we assume that the
> underlying baseline power spectrum is constant, the definitions are
> equivalent.
>
> At face value, the older ERSP definition, where each spectral estimate
> is normalised relative to the local baseline, would seem more
> appropriate, since it seems inevitable that there must be some
> variation in the baseline during any experiment taking more than a few
> minutes (if for no other reason than a drop in subject alertness). Why
> then does EEGLAB use the latter definition (in both the article and
> the code as far as I can see)? The difference between the two ERSP
> estimates would seem to be quite small, so is there perhaps a
> computational advantage in the latter formulation?
>
> * In both articles the values are also log-transformed, but as far as
> I can see this is irrelevant for the current concern.
>
> --
> Aleksander Alafuzoff
> Research assistant
> Cognitive Science Unit,
> University of Helsinki
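
A minimal NumPy sketch of the two definitions Aleksander contrasts (the
function names and random data are invented for illustration) shows that
they coincide when the baseline is constant across epochs:

```python
import numpy as np

def ersp_1993(power, bl):
    """Normalise each epoch by its own baseline mean, then average epochs."""
    per_epoch_bl = power[:, :, bl].mean(axis=2, keepdims=True)
    return (power / per_epoch_bl).mean(axis=0)

def ersp_2004(power, bl):
    """Average power across epochs first, then divide by the mean baseline."""
    mean_power = power.mean(axis=0)
    return mean_power / mean_power[:, bl].mean(axis=1, keepdims=True)

rng = np.random.default_rng(1)
power = rng.uniform(1.0, 4.0, size=(20, 4, 100))  # (epochs, freqs, times)
bl = slice(0, 25)                                  # pre-stimulus window

# With trial-to-trial baseline variation the two estimates differ slightly...
print(np.abs(ersp_1993(power, bl) - ersp_2004(power, bl)).max())

# ...but if every epoch shares the same baseline, they coincide.
power[:, :, bl] = power[0, :, bl]
print(np.allclose(ersp_1993(power, bl), ersp_2004(power, bl)))
```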
> _______________________________________________
> Eeglablist page: http://sccn.ucsd.edu/eeglab/eeglabmail.html
> To unsubscribe, send an empty email to eeglablist-unsubscribe at sccn.ucsd.edu
> For digest mode, send an email with the subject "set digest mime" to eeglablist-request at sccn.ucsd.edu


