[Eeglablist] statistics in EEGLAB

Robert Brown bobrobbrown at googlemail.com
Wed Nov 18 02:39:16 PST 2009


Thank you very much for your responses.

I'll proceed with the strategy I proposed, which Scott and Guillaume
supported: (1) do the single-trial analysis in each single subject, and
(2) look at whether and where the effects overlap.

One question remains (just to make sure I am doing the right thing): when
comparing conditions, does the EEGLAB newtimef function give me the required
"single-subject, single-trial" analysis? That is, does the alpha-level output
show me the regions where, for this subject, there are significant differences
between the conditions as evaluated on the basis of single trials? I could not
be sure of this, because the alpha level is also computed for each condition
separately. If not, could you please point me to a more appropriate way of
doing this in EEGLAB? (I know this is possible in FieldTrip, but I have
already done all the other analyses and preprocessing in EEGLAB.)
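
In case a concrete example helps, this is the kind of call I have in mind
(just a sketch: EEG1/EEG2, the channel index, and the wavelet parameters are
placeholders, and I am not certain this is the intended way to obtain
trial-based condition statistics):

    % EEG1, EEG2: the two condition datasets for one subject; chan is a
    % placeholder channel index. Passing the data as a cell array asks
    % newtimef to compare the two conditions, and 'alpha' requests
    % significance computed across the single trials of this subject.
    chan = 10;
    [ersp, itc, powbase, times, freqs] = ...
        newtimef({EEG1.data(chan,:,:) EEG2.data(chan,:,:)}, ...
                 EEG1.pnts, [EEG1.xmin EEG1.xmax]*1000, EEG1.srate, ...
                 [3 0.5], 'alpha', 0.05, 'naccu', 500);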

Thank you very much for your time and for your ideas.

best,
Bob

2009/11/17 Guillaume Rousselet <g.rousselet at psy.gla.ac.uk>

> Robert,
>
> there is no theoretical reason to limit your statistical analyses to group
> effects. It can easily be argued that you will actually have more power when
> you do a comparison across 200 trials rather than across 10 subjects. The
> chances that a robust effect will occur by chance in 4 subjects are almost
> nil if you have enough trials and a clean signal. So you could do your stats
> across trials in each subject, show the data for each subject, and then
> report something like the number of subjects showing a significant effect at
> any given time point, electrode, or for a given cluster, ICA component, ...
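>
> In practice (a rough sketch only; the array names are placeholders, and you
> should check the statcond help in your EEGLAB version for the exact options,
> e.g. whether you need an unpaired test across trials), this could look like:
>
>     % erspA{s}, erspB{s}: freqs x times x trials for subject s, conditions A/B
>     nsubj   = length(erspA);
>     sigmask = zeros(size(erspA{1},1), size(erspA{1},2), nsubj);
>     for s = 1:nsubj
>         [t, df, pvals] = statcond({erspA{s} erspB{s}}, ...
>                                   'method', 'perm', 'naccu', 1000);
>         sigmask(:,:,s) = pvals < 0.05;  % per-subject significance (uncorrected)
>     end
>     nsig   = sum(sigmask, 3);           % how many subjects are significant where
>     allsig = all(sigmask, 3);           % points significant in every subject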
>
> I've been exploring ways of taking the variance within observers into
> account in the following papers:
>
> Rousselet, G. A., Husk, J. S., Bennett, P. J., & Sekuler, A. B. (2008).
> Time course and robustness of ERP object and face differences. Journal of
> Vision, 8(12):3, 1-18, http://journalofvision.org/8/12/3/,
> doi:10.1167/8.12.3.
> Rousselet, G. A., Pernet, C. R., Bennett, P. J., & Sekuler, A. B. (2008).
> Parametric study of EEG sensitivity to phase noise during face processing.
> BMC Neuroscience, 9:98, http://www.biomedcentral.com/1471-2202/9/98.
>
> Best,
>
> GAR
>
>
>
>
> On 13 Nov 2009, at 18:48, Scott Makeig wrote:
>
> I agree. For example, if there are 3 subjects, then simple binomial
> probability can give no better a result than p <= 12.5%. However, in the
> case that each single-subject effect, across single trials, is significant
> (e.g., at the p < .001 level), a much stronger inference can be derived
> using reasonable subject distribution assumptions.
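>
> To make the arithmetic explicit (Fisher's method below is just one example
> of a combination rule, and the within-subject p-values are hypothetical),
> compare the sign-test floor with combining the three within-subject tests:
>
>     p_sign   = 0.5^3;                   % = 0.125, the floor with 3 subjects
>     p_within = [0.001 0.001 0.001];     % hypothetical within-subject p-values
>     chi2     = -2*sum(log(p_within));   % Fisher's combined statistic
>     p_comb   = 1 - chi2cdf(chi2, 2*length(p_within));  % ~2e-7 (Stats Toolbox)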
>
> Scott
>
> On Thu, Nov 12, 2009 at 4:03 AM, Robert Brown <bobrobbrown at googlemail.com> wrote:
>
>> Dear Arno and All,
>>
>> thank you very much for your enlightening response.
>>
>> Maybe one idea: let's say I only have 4 subjects. Statistics based on
>> "subject means" would be unreliable and I would not get any results.
>> However, it could be that for each single subject there is a significant
>> difference based on trials in the same time window, which would actually be
>> strong evidence for a difference between the conditions and which could be
>> reported as "for each single subject, p < .05 (corrected)". I am sorry if
>> this is not right, but I assume there could be instances where group
>> statistics with 3-4 subjects would not show anything but single-trial
>> statistics would. (Good examples of important studies with so few subjects
>> are Tong & Engel, 2001, in Nature, with 4 subjects and fMRI, and Resulaj et
>> al., 2009, in Nature, with 3 subjects and behavior.)
>>
>>
>> To conclude: maybe the single-trial statistics would work if (a) they were
>> calculated individually for each subject, based only on that subject's
>> single trials, and then (b) the (time-frequency) regions were plotted where
>> all the subjects show significant differences based on their single-trial
>> analysis.
>>
>> Thank you for your attention, and good luck.
>>
>> Bob
>>
>> 2009/11/11 Arnaud Delorme <arno at ucsd.edu>
>>
>>> Dear Bob,
>>>
>>> Thanks for the comments. I think you are using the statmode option
>>> "trial" from the command line. This option is quite experimental: it was
>>> implemented a while ago and is probably not forward compatible with more
>>> recent changes. Also, the "statmode", "trial" option (assuming it was
>>> working) should only be used to plot a single subject. The reason has to
>>> do with the type of null hypothesis being tested.
>>>
>>> When testing with 'statmode', 'subject' for two conditions, the NULL
>>> hypothesis is: given the subjects I have recorded, and given that these
>>> subjects are a good representation of the general population of all
>>> possible subjects, there is no difference in the ERP/spectrum/ERSP/ITC
>>> between the two experimental conditions in the general subject population.
>>> Using parametric, permutation, or bootstrap statistics (and their
>>> assumptions) you may either accept or reject this hypothesis at a given
>>> confidence level.
>>>
>>> When testing with 'statmode', 'trial' on a single subject (still two
>>> conditions), the NULL hypothesis is: given the trials I have recorded, and
>>> given that these trials are a good representation of the whole population
>>> of trials for this subject, there is no difference in the
>>> ERP/spectrum/ERSP/ITC between the two experimental conditions for this
>>> subject. Again, using parametric, permutation, or bootstrap statistics
>>> (and their assumptions) you may either accept or reject this hypothesis at
>>> a given confidence level.
>>>
>>> As you can see, the two hypotheses are quite different. One makes an
>>> inference about the population of subjects, the other about the
>>> population of trials.
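>>>
>>> To make the trial-level null concrete, a permutation test simply asks how
>>> often a difference as large as the observed one arises when the condition
>>> labels are shuffled across this subject's trials. A minimal illustration
>>> (not the actual EEGLAB implementation; a and b are placeholder vectors of
>>> single-trial values for one time-frequency point):
>>>
>>>     nperm = 2000;
>>>     obs   = mean(a) - mean(b);          % observed condition difference
>>>     pool  = [a(:); b(:)];
>>>     na    = length(a);
>>>     surro = zeros(nperm,1);
>>>     for k = 1:nperm
>>>         idx      = randperm(length(pool));   % shuffle condition labels
>>>         surro(k) = mean(pool(idx(1:na))) - mean(pool(idx(na+1:end)));
>>>     end
>>>     p = mean(abs(surro) >= abs(obs));        % two-tailed permutation p-value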
>>>
>>> Now, if you pool the trials from different subjects and attempt to perform
>>> statistics, things become more complex. The new hypothesis would then be:
>>> given the trials I have recorded from my subjects, and given that these
>>> trials are a good representation of the whole population of trials from
>>> the general population of subjects, there is no difference in the
>>> ERP/spectrum/ERSP/ITC between the two experimental conditions in the
>>> general population of subjects. But this hypothesis is relatively biased,
>>> because I personally think that all the trials are *not* a good
>>> representation of the whole population of trials from the general
>>> population of subjects. The trials are a good representation of all the
>>> trials from the subjects actually recorded, but not necessarily of the
>>> general subject population. Therefore, the real NULL hypothesis would be:
>>> given the trials I have recorded from all of my subjects, and given that
>>> these trials are a good representation of the whole population of trials
>>> from these subjects, there is no difference in the ERP/spectrum/ERSP/ITC
>>> between the two experimental conditions in the recorded subjects. As you
>>> can see, rejecting this NULL is of relatively limited interest, since we
>>> care about the general population of subjects and not just the recorded
>>> subjects.
>>>
>>> If anybody has better ideas (or a Matlab function) for how to handle the
>>> subject/trial problem (it would be nice to include trials in the
>>> statistical analysis in order to make it more powerful), we will gladly
>>> take them.
>>>
>>> Best,
>>>
>>> Arno
>>>
>>> ps: we will remove the 'statmode', 'trial' option for now.
>>> pps: for basic inferential statistics, you may also refer to this book
>>> chapter: http://sccn.ucsd.edu/~arno/mypapers/statistics.pdf
>>>
>>> On Nov 11, 2009, at 12:29 AM, Robert Brown wrote:
>>>
>>> Dear Arno & others,
>>>
>>> This does not seem to be as simple as Arno suggested (but thanks):
>>>
>>> 1. I have precomputed the values for these channels (with "savetrials",
>>> "on").
>>> 2. These channels all have data.
>>> 3. I can plot the data of the same channels when I use "statmode",
>>> "subjects".
>>> 4. I am using EEGLAB v7.1.3.13b.
>>> 5. I have now tried it with v7.1.7.18b and I still get the log-of-zero
>>> error. (You might be interested to know that, in addition, with
>>> permutations and bootstrap I now get "??? Error using ==> reshape" in
>>> statcond>surrogate at 438 and in statcond at 301, and with this latest
>>> version the reshape error even happens with "statmode", "subjects".)
>>>
>>> Thus, any other suggestions about what could be going wrong with my
>>> single-trial analysis in the STUDY would be very much appreciated.
>>>
>>> Thank you very much, and take care,
>>> Bob
>>>
>>> 2009/11/11 Arnaud Delorme <arno at ucsd.edu>
>>>
>>>> Dear Bob,
>>>>
>>>> I think this might be because you are trying to plot the ERSP of a
>>>> channel that contains only zeros. This error also arose in old versions
>>>> of EEGLAB when masking for significance.
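>>>>
>>>> If you want a quick workaround in the meantime (an untested sketch only,
>>>> assuming the zeros come from an empty or flat channel), you could clamp
>>>> the magnitude before taking the log on the line you quoted from
>>>> std_readdata:
>>>>
>>>>     % instead of:  ersp{c,g} = 20*log10(abs(ersp{c,g}));
>>>>     ersp{c,g} = 20*log10( max(abs(ersp{c,g}), eps) );  % avoids log(0) -> -Inf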
>>>>
>>>> Hope this helps,
>>>>
>>>> Arno
>>>>
>>>>
>>>> On Nov 7, 2009, at 11:38 AM, Robert Brown wrote:
>>>>
>>>>> Hi guys,
>>>>>
>>>>> I've been trying to get the STUDY ERSP analysis working on single
>>>>> trials, but I have not succeeded.
>>>>>
>>>>> In the function std_readdata I get the "Warning: Log of zero." warning
>>>>> on the line ersp{c,g} = 20*log10(abs(ersp{c,g})); meaning that the
>>>>> absolute value is 0 at some point.
>>>>> This leads to further errors:
>>>>>
>>>>> ??? Error using ==> set
>>>>> Bad value for axes property: 'CLim'
>>>>> Values must be increasing and non-NaN.
>>>>>
>>>>> Error in ==> caxis at 80
>>>>>            set(ax,'CLim',arg);
>>>>>
>>>>> Error in ==> tftopo at 714
>>>>> caxis([g.limits(5:6)]);
>>>>>
>>>>> I've tried to fix it but I'm not clever enough. Any help would be
>>>>> appreciated.
>>>>>
>>>>> Thank you so much,
>>>>> Bob
>>>>>
>>>>
>>>>
>>>
>>>
>>
>>
>
>
>
> --
> Scott Makeig, Research Scientist and Director, Swartz Center for
> Computational Neuroscience, Institute for Neural Computation, University of
> California San Diego, La Jolla CA 92093-0961, http://sccn.ucsd.edu/~scott
>
>
>
>
>
> ************************************************************************************
> Guillaume A. Rousselet, Ph.D.
> Lecturer
>
> Centre for Cognitive Neuroimaging (CCNi)
> Department of Psychology
> Faculty of Information & Mathematical Sciences (FIMS)
> University of Glasgow
> 58 Hillhead Street
> Glasgow, UK
> G12 8QB
>
> The University of Glasgow, charity number SC004401
>
> http://web.me.com/rousseg/GARs_website/
>
> Email: g.rousselet at psy.gla.ac.uk
> Fax. +44 (0)141 330 4606
> Tel. +44 (0)141 330 6652
> Cell +44 (0)791 779 7833
>
> "For reasons I wish I understood, the spectacle of sync strikes a chord
> in us, somewhere deep in our souls. It's a wonderful and terrifying thing.
> Unlike many other phenomena, the witnessing of it touches people at a primal
> level. Maybe we instinctively realize that if we ever find the source of
> spontaneous order, we will have discovered the secret of the universe."
>
>
> Steven Strogatz - *Sync* - 2003
>
> ************************************************************************************
>
>
>
>
>