[Eeglablist] Questions on FFT phase results and ICA methodology

Chris Rose u6t9n7 at u.northwestern.edu
Wed Jul 18 14:44:36 PDT 2018


Hi Makoto,

Thanks for the response! I'll play around with the newtimef info you sent,
and may have a couple of follow-up questions.

As for the ICA, skipping generation of the additional ICA statistics made a
huge difference in bringing down the file size. I've also been handling the
sliding windows somewhat crudely, just breaking the necessary segment for
each window out into a new dataset and running the ICA from there. However,
since the data are already processed through regepoch into 1-s intervals, is
there a way to have runica (or any of the other ICA functions) slide along
the timeline every fixed number of epochs using command inputs instead? I
didn't see anything in the help files indicating this is possible, but it
seemed like a cleaner solution.
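For reference, my current crude handling looks roughly like the sketch below (the window length, ICA options, and storage step are placeholders, not our actual settings):

```matlab
% Crude sliding-window ICA: run one ICA per block of 1-s epochs (sketch)
winLen = 60;                               % epochs (i.e., seconds) per window
nWin   = floor(EEG.trials / winLen);
for w = 1:nWin
    idx    = (w-1)*winLen + (1:winLen);    % epoch indices for this window
    EEGwin = pop_select(EEG, 'trial', idx);            % extract the window
    EEGwin = pop_runica(EEGwin, 'icatype', 'runica');  % ICA on that window
    % ... keep EEGwin.icaweights / EEGwin.icasphere, discard the rest ...
end
```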

Best,

Chris

On 17 July 2018 at 16:46, Makoto Miyakoshi <mmiyakoshi at ucsd.edu> wrote:

> Dear Chris,
>
> > however I'm unclear on how the post-newtimef calculation would look.
>
> Although it is not described in the newtimef() help, if you look at line
> 369 of the function,
>
> [P,R,mbase,timesout,freqs,Pboot,Rboot,alltfX,PA] = newtimef( data,
> frames, tlimits, Fs, varwin, varargin);
>
> The 8th output, alltfX, is a 3-D array of complex values with dimensions
> freq x time x trials. To calculate single-trial phase, simply take
> angle(alltfX).
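> As a minimal sketch, assuming an epoched EEG structure (the channel index
> and the cycles value of 0, i.e. FFT mode, are placeholders; adjust to your
> own analysis):
>
> ```matlab
> % Single-trial phase from the complex time-frequency output of newtimef()
> chan = 1;                                 % channel to analyze (placeholder)
> data = squeeze(EEG.data(chan,:,:));       % frames x trials for one channel
> [P, R, mbase, timesout, freqs, Pboot, Rboot, alltfX] = ...
>     newtimef(data, EEG.pnts, [EEG.xmin EEG.xmax]*1000, EEG.srate, 0, ...
>              'plotersp', 'off', 'plotitc', 'off');
> phase = angle(alltfX);                    % freqs x times x trials, radians
> ```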
>
> > 2) A little more unorthodox, I was hoping to include ICA information for
> 1-minute epochs across the entire recording to see if there are changes in
> the predominant components as the stimulus progresses. Trying to simply
> segment an EEG file into minutes and run ICA on each segment resulted in a
> truly enormous amount of data when testing it with just one subject
> (~60GB), and that would certainly be prohibitive for including it in our
> analysis. Is there any way this might be done more efficiently, such
> that the resulting data is of a more manageable size?
>
> Even if you have 256-channel data for 240 sliding windows, the total size
> of the weight matrix is double(256 x 256 x 240) = 126 MB per subject. You
> need to generate and save this matrix for both EEG.icaweights and
> EEG.icasphere, so you need 126 x 2 = 252 MB. I don't know why you say you
> need 60GB, unless you have 240 subjects x 252 MB = 60GB, in which case you
> have no choice.
>
> If you haven't done so, consider saving only EEG.icaweights and
> EEG.icasphere for each iteration.
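> A sketch of that bookkeeping (winIdx is a hypothetical cell array of epoch
> indices per window; the ICA options are placeholders):
>
> ```matlab
> % Save only the ICA weight and sphere matrices for each sliding window
> nWin = 240; nCh = EEG.nbchan;
> icaW = zeros(nCh, nCh, nWin);      % double: nCh * nCh * nWin * 8 bytes
> icaS = zeros(nCh, nCh, nWin);      % e.g. 256*256*240*8 bytes = ~126 MB each
> for w = 1:nWin
>     EEGwin = pop_select(EEG, 'trial', winIdx{w});   % epochs in window w
>     EEGwin = pop_runica(EEGwin, 'icatype', 'runica');
>     icaW(:,:,w) = EEGwin.icaweights;
>     icaS(:,:,w) = EEGwin.icasphere;
> end
> save('ica_windows.mat', 'icaW', 'icaS');   % ~252 MB total per subject
> ```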
>
> Makoto
>
> On Sun, Jul 15, 2018 at 9:11 PM Chris Rose <u6t9n7 at u.northwestern.edu>
> wrote:
>
>> Hi all,
>>
>> I am working on a project where we'll be using A.I. to do an exploratory
>> analysis on recordings taken during the viewing of a stimulus that lasted
>> ~90 minutes. Given the length of the recording period and the number of
>> subjects (60), we are trying to be selective about what information we use
>> as learning inputs to avoid overloading the AI algorithm.
>>
>> There are two things that I was hoping to include as inputs in the AI
>> analysis that I'm having trouble getting.
>>
>> 1) I am hoping to include phase calculations taken from FFT of the
>> standard frequency bands. We've been using spectopo to calculate the power
>> in each band, but I don't see any option to calculate phase with this
>> function. This thread
>> <https://sccn.ucsd.edu/pipermail/eeglablist/2011/003837.html> from the
>> eeglablist archives raises a similar question, and the original poster
>> says they solved it by using newtimef outputs to calculate phase for each
>> band/channel; however, I'm unclear on what the post-newtimef calculation
>> would look like. Since the data set is so large, I'm
>> hoping to avoid actually calculating coherence prior to the AI, and simply
>> let the neural net calculate it internally.
>>
>> 2) A little more unorthodox, I was hoping to include ICA information for
>> 1-minute epochs across the entire recording to see if there are changes in
>> the predominant components as the stimulus progresses. Trying to simply
>> segment an EEG file into minutes and run ICA on each segment resulted in a
>> truly enormous amount of data when testing it with just one subject
>> (~60GB), and that would certainly be prohibitive for including it in our
>> analysis. Is there any way this might be done more efficiently, such
>> that the resulting data is of a more manageable size?
>>
>> Thanks in advance for any thoughts you can offer with either issue!
>>
>> Best,
>>
>> Chris
>> _______________________________________________
>> Eeglablist page: http://sccn.ucsd.edu/eeglab/eeglabmail.html
>> To unsubscribe, send an empty email to eeglablist-unsubscribe at sccn.
>> ucsd.edu
>> For digest mode, send an email with the subject "set digest mime" to
>> eeglablist-request at sccn.ucsd.edu
>
>
>
> --
> Makoto Miyakoshi
> Swartz Center for Computational Neuroscience
> Institute for Neural Computation, University of California San Diego
>

