[Eeglablist] Large-volume cross-coherence analysis

Will Sedley willsedley at gmail.com
Tue Mar 29 11:22:24 PDT 2011


Dear Arno and Thomas,

Many thanks for your helpful suggestions.

I have been able to achieve a similar effect with relatively minimal
computation by using single-trial phase locking values (SPLVs) instead
of coherence.

To do this I do the following:
1. Run newtimef on each channel and save the tfdata variable (the last
optional output; a complex freq x time x trial matrix).
2. Take the phase angle of the complex tfdata for each channel and save
it in a 4D matrix of channel x freq x time x trial.
3. For each channel pair, calculate the SPLV at every time/frequency
point by taking the exponential of i times the phase angle difference
(done quickly by operating on the whole 3D matrix of freq x time x
trial at once).
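
In rough MATLAB terms this looks something like the sketch below (not
EEGLAB code; alltf, nchan, nfreqs, ntimes and ntrials are placeholder
names, alltf{ch} is assumed to hold the complex tfdata already saved from
newtimef for channel ch, and the channel dimension is put last purely for
indexing convenience):

% Rough sketch with placeholder names; alltf{ch} holds the complex tfdata
% (freqs x times x trials) saved from newtimef for channel ch.
allphase = zeros(nfreqs, ntimes, ntrials, nchan);      % phase angles, channel last
for ch = 1:nchan
    allphase(:,:,:,ch) = angle(alltf{ch});             % phase of the complex TF data
end

plv = zeros(nfreqs, ntimes, nchan, nchan);             % trial-averaged PLV per pair
for ch1 = 1:nchan-1
    for ch2 = ch1+1:nchan
        dphi = allphase(:,:,:,ch1) - allphase(:,:,:,ch2);  % all points and trials at once
        splv = exp(1i*dphi);                           % single-trial phase-locking values
        plv(:,:,ch1,ch2) = abs(mean(splv, 3));         % one common summary: average over trials
        plv(:,:,ch2,ch1) = plv(:,:,ch1,ch2);           % symmetric in the two channels
    end
end

The abs(mean(...)) across trials is just one way of summarising the
single-trial values; the complex per-trial values in splv can be kept and
summarised differently if preferred.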

Of course SPLVs are different from coherence, and might be inferior,
particularly for beamformer virtual-electrode data where correlated
sources cancel each other out, so I am keen to explore the alternatives
you have both suggested.

Thanks again,
Will


On 29 March 2011 18:46, Arnaud Delorme <arno at ucsd.edu> wrote:
> Hi Thomas and Will,
> EEGLAB does not store the FFT for individual channels, so if you want to
> compute the coherence between all pairs of channels, the FFT will have to be
> recomputed for every pair. I agree that this is far from optimal.
> However, coherence analysis should be replaced by partial coherence analysis,
> which considers the multivariate EEG channels (or ICA sources) jointly and
> removes spurious (indirect) coherence. The SIFT toolbox (Source Information
> Flow Toolbox) by Tim Mullen, which is built on top of EEGLAB, does exactly
> that (and many other things).
> http://sccn.ucsd.edu/wiki/SIFT
> Best,
> Arno
> On Feb 23, 2011, at 9:08 AM, Thomas Ferree wrote:
>
> Will,
> I am not sure how EEGLAB implements this. For N electrodes, there are
> N(N-1)/2 unique pairs. If EEGLAB computes coherence for every channel
> pair independently then, depending upon the implementation, it might
> recompute the FFTs separately for each of the N(N-1)/2 pairs, and that
> would be slow. The most computationally efficient way would be to compute
> the FFT for each channel once and for all, store the results in an array,
> and then read that array to compute the coherence for each pair. Perhaps
> Arno can comment on which of these approaches is used in EEGLAB?
> Also, a standard trick in any spectral analysis is to make sure that the
> window length T is a power of 2, e.g., 512 points rather than 500 points
> per window, because this makes computation of the FFT scale as T log T
> rather than T^2.
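
To make the one-FFT-per-channel approach concrete, a rough MATLAB sketch
(not EEGLAB code) might look like the following; here data stands for a
channels x samples x trials array (a placeholder name), a Hann taper is
applied to each window, and coherence is estimated by averaging cross- and
auto-spectra across trials:

% Rough sketch (placeholder names): one FFT per channel and trial, stored
% once and reused for every pair, instead of recomputing FFTs per pair.
[nchan, T, ntrials] = size(data);          % data: channels x samples x trials

nfft = 2^nextpow2(T);                      % pad window length up to a power of 2
win  = 0.5*(1 - cos(2*pi*(0:T-1)/(T-1)));  % Hann taper, written out to avoid toolboxes

X = zeros(nchan, nfft, ntrials);           % all spectra, computed once
for tr = 1:ntrials
    seg = data(:,:,tr) .* repmat(win, nchan, 1);   % taper this trial
    X(:,:,tr) = fft(seg, nfft, 2);                 % FFT along the time dimension
end

Sxx = mean(abs(X).^2, 3);                  % auto-spectra averaged over trials
coh = zeros(nchan, nchan, nfft);           % magnitude-squared coherence per pair
for ch1 = 1:nchan-1
    for ch2 = ch1+1:nchan
        Sxy = mean(squeeze(X(ch1,:,:)) .* conj(squeeze(X(ch2,:,:))), 2);  % cross-spectrum
        coh(ch1,ch2,:) = (abs(Sxy).^2)' ./ (Sxx(ch1,:) .* Sxx(ch2,:));
        coh(ch2,ch1,:) = coh(ch1,ch2,:);   % coherence is symmetric in the channels
    end
end

Only the first nfft/2+1 frequency bins are unique for real-valued data, and
averaging across trials (rather than across sliding windows within a trial)
is just one of several reasonable estimators.
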
> Finally, a qualitatively different way to approach the problem of many
> electrodes is to compute the so-called canonical coherence between sets of
> electrodes.  That is a reasonable approach if you are only interested in
> coherence between hemispheres, areas, etc.  Here are some references that
> might be helpful:
> D. J. Fletcher, J. Raz, and G. Fein. Intra-hemispheric alpha coherence
> decreases with increasing cognitive impairment in HIV patients.
> Electroencephalography and Clinical Neurophysiology 102 (1997) 286-294.
> (Careful: there is a mathematical typo in the appendix.)
>
> A. A. Lyubushin, Jr. Analysis of canonical coherences in the problems of
> geophysical monitoring. Izvestiya, Physics of the Solid Earth, Vol. 34,
> No. 1, 1998, pp. 52-58. Translated from Fizika Zemli, No. 1, 1998, pp. 59-66.
>
> D. J. Thomson. Generalized coherences between vector data on two
> spacecraft. IEEE, 2006.
>
> J. Timmer et al. Cross-spectral analysis of tremor time series.
> International Journal of Bifurcation and Chaos, Vol. 10, No. 11 (2000)
> 2595-2610.
>
> I have implemented a canonical coherence function that reads in the EEGLAB
> data structure, and I am willing to share it if you are interested.
> Best,
> Tom.
> --
> Thomas Ferree, PhD
> Cell: (415) 577-1285
> On Mon, Feb 21, 2011 at 10:06 AM, Will Sedley <willsedley at gmail.com> wrote:
>>
>> Dear EEGlab experts,
>>
>> I would be grateful for any help on this.
>>
>> I have a dataset of about 190 channels (which I have cut down from
>> around 1400), and I need to compute cross-coherence between every
>> channel and every other channel.
>>
>> Presently I am running newcrossf for every channel combination, which
>> means running it approximately 18,000 times per condition. I have limited
>> the resolution to 2 time points and 30 frequencies to speed things up,
>> but it still takes 5-10 hours to run.
>>
>> I am wondering if there is a more computationally efficient way to do
>> this. For example, is there a way to compute coherence just from the
>> complex values in the last output argument of newtimef? Then perhaps I
>> could run a time/frequency decomposition once per channel, and then do
>> some further computations to obtain the coherences.
>>
>> If anyone can suggest how to go about this (or an equivalent method),
>> or even provide any code to help with this, then I would be extremely
>> grateful.
>>
>> Many thanks in advance and best wishes,
>> Will



