[Eeglablist] Grand Average
Stephen Politzer-Ahles
politzerahless at gmail.com
Mon Oct 22 17:18:30 PDT 2012
Hello Alberto,
There may be discussion of this issue in Luck (2005) and/or Handy (2004);
if there is, you can ignore what I say and check those instead.
My assumption, though, is that the reason we typically average them the way
we do, instead of using a weighted average, is that more epochs does not
necessarily mean better data. It's true that an insufficient number of
epochs (and/or subjects) will make the ERP noisy. But once you reach a
certain point, adding more epochs does not make the data a lot better (see
Luck's (2005) discussion of the signal-to-noise ratio). Each subject is
meant to be one datapoint, so once a given subject reaches the threshold
after which she has "enough" trials to make a good ERP, then it's fair to
make that subject a datapoint.
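(To make the "diminishing returns" point concrete: under the usual assumption that noise is independent across trials, averaging N trials improves the signal-to-noise ratio by roughly a factor of sqrt(N), which is the standard result Luck (2005) discusses. A tiny illustrative sketch, not from any particular toolbox:

```python
import math

# Illustrative only: SNR gain from averaging N independent-noise trials
# grows like sqrt(N), so each doubling of trials buys less and less.
for n in (10, 20, 40, 80, 160):
    print(f"{n:3d} trials -> SNR gain {math.sqrt(n):.2f}x")
```

Going from 10 to 40 trials doubles the SNR, but going from 80 to 160 only improves it by about 40%.)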
Also, of course, the characteristics of the ERP components you are
interested in are likely to differ across subjects; some people may have a
bigger P300 or N400 or whatnot overall. There is not necessarily a
straightforward relationship between this and how clean their data are
(i.e., it's not necessarily the case that someone who has a bigger/smaller
P300 also happens to blink more/less during the experiment). Thus, by
weighting subjects differently because of how many clean epochs they
happened to have, you may be inadvertently biasing your grand averages
towards certain individuals. At least when you treat all subjects equally,
you are neutral as far as that is concerned.
Those are just my impressions; I don't know if there is published
literature discussing this topic, and if there is then it of course is a
better reference than my impressions!
Best,
Steve
On Mon, Oct 22, 2012 at 7:51 AM, Alberto Gonzalez V
<vilanova5 at hotmail.com>wrote:
> Hi to all,
>
> I have a question about ERP methodology. Consider that we record the EEG
> during a task in 3 subjects, then we do the averages (considering that
> the task has 60 epochs):
> Subject 1 did a perfect task, so we did the average with 60
> epochs.
> Subject 2 had some problems during the recording, and the average
> was done with 40 epochs.
> Subject 3 had only 20 epochs, but we think that it's enough and
> did the average.
>
> So Subj 1 has all the epochs (=1), Subj 2 has 2/3 of the epochs, and
> Subj 3 has only 1/3. But in the grand average we treat them as if they had
> all the epochs (=1). Isn't it better to give each subject a proportional
> weight (considering its number of epochs) in the grand average (something
> like: ([Subj1*1]+[Subj2*2/3]+[Subj3*1/3])/2)?
>
> Thanks for your time!!!
>
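(For concreteness, here is a minimal sketch of the two schemes Alberto describes, using made-up per-subject amplitudes; `np.average` with `weights` implements the proportional version:

```python
import numpy as np

# Hypothetical per-subject ERP amplitudes (single values for illustration)
# and the number of clean epochs each subject's average was based on.
erps   = np.array([1.0, 1.2, 0.8])
epochs = np.array([60, 40, 20])

# Conventional grand average: every subject counts equally.
unweighted = erps.mean()

# Proposed alternative: weight each subject by epoch count
# (normalizing by the sum of the weights).
weighted = np.average(erps, weights=epochs)

print(unweighted, weighted)
```

Note how the weighted version pulls the grand average toward Subject 1's value simply because that subject happened to keep more epochs, which is exactly the bias I described above.)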
--
Stephen Politzer-Ahles
University of Kansas
Linguistics Department
http://people.ku.edu/~sjpa/