<div dir="ltr">Dear Chris,<div><br></div><div>> however I'm unclear on how the post-newtimef calculation would look.<br></div><div><br></div><div>Although it is not described in the newtimef() help, if you look at line 369 of the function you will find:</div><div><br></div><div>[P,R,mbase,timesout,freqs,Pboot,Rboot,alltfX,PA] = newtimef( data, frames, tlimits, Fs, varwin, varargin);<br></div><div><br></div><div>The 8th output, alltfX, is a 3-D tensor of complex values with dimensions freqs x times x trials. To calculate single-trial phase, simply take angle(alltfX).</div><div><br></div><div>> 2) A little more unorthodox, I was hoping to include ICA information for 1-minute epochs across the entire recording to see if there are changes in the predominant components as the stimulus progresses. Trying to simply segment an EEG file into minutes and run ICA on each segment resulted in a truly enormous amount of data when testing it with just one subject (~60 GB), and that would certainly be prohibitive to include in our analysis. Is there any way that this might be done more efficiently such that the resulting data is of a more manageable size?<br></div><div><br></div><div>Even if you have 256-channel data and 240 sliding windows, the total size of the weight matrices, stored as doubles, is 256 x 256 x 240 x 8 bytes = 126 MB per subject. You need to generate and save such a matrix for both EEG.icaweights and EEG.icasphere, so you need 126 x 2 = 252 MB. 
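For concreteness, a minimal MATLAB sketch of both steps might look like the following. The channel index, cycles setting, window length, and runica options are illustrative assumptions on my part, not values from your data set:

```matlab
% Sketch only -- chan, the [3 0.5] cycles setting, and the runica
% options are placeholder assumptions, not from this thread.

% (1) Single-trial phase from newtimef's 8th output.
% alltfX is freqs x times x trials, complex-valued.
chan = 1;
[P, R, mbase, timesout, freqs, Pboot, Rboot, alltfX] = ...
    newtimef(EEG.data(chan, :, :), EEG.pnts, ...
             [EEG.xmin EEG.xmax]*1000, EEG.srate, [3 0.5]);
singleTrialPhase = angle(alltfX);    % radians, freqs x times x trials

% (2) Save only the per-window decompositions, not the unmixed data.
nWin    = 240;                        % e.g. 240 one-minute windows
nChan   = EEG.nbchan;
winPnts = 60*EEG.srate;               % samples per 1-minute window
allW    = zeros(nChan, nChan, nWin);  % EEG.icaweights per window
allSph  = zeros(nChan, nChan, nWin);  % EEG.icasphere  per window
for w = 1:nWin
    idx = (w-1)*winPnts + (1:winPnts);
    [wts, sph] = runica(EEG.data(:, idx), 'extended', 1);
    allW(:, :, w)   = wts;
    allSph(:, :, w) = sph;
end
save('ica_by_window.mat', 'allW', 'allSph');  % ~252 MB for 256 channels
```

The decompositions can later be reapplied to any window as wts*sph*data without ever storing the component activations themselves.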
I don't know why you say you need 60 GB, unless you have 240 subjects x 252 MB = 60 GB, in which case you have no choice.</div><div><br></div><div>If you haven't done so, consider saving only EEG.icaweights and EEG.icasphere for each iteration.</div><div><br></div><div>Makoto</div><br><div class="gmail_quote"><div dir="ltr">On Sun, Jul 15, 2018 at 9:11 PM Chris Rose <<a href="mailto:u6t9n7@u.northwestern.edu">u6t9n7@u.northwestern.edu</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><div><div><div><div><div>Hi all,<br><br></div>I am working on a project where we'll be using A.I. to do an exploratory analysis on recordings taken during the viewing of a stimulus that lasted ~90 minutes. Given the length of the recording period and the number of subjects (60), we are trying to be selective about what information we use as learning inputs to avoid overloading the AI algorithm.<br><br></div>There are two things that I was hoping to include as inputs in the AI analysis that I'm having trouble obtaining.<br><br></div>1) I am hoping to include phase calculations taken from FFT of the standard frequency bands. We've been using spectopo to calculate the power in each band, but I don't see any option to calculate phase with this function. <a href="https://sccn.ucsd.edu/pipermail/eeglablist/2011/003837.html" target="_blank">This thread</a> from a previous eeglablist question raises a similar issue, and the original writer says they solved their problem by using newtimef outputs to calculate phase for each band/channel; however, I'm unclear on how the post-newtimef calculation would look. 
Since the data set is so large, I'm hoping to avoid actually calculating coherence prior to the AI, and simply let the neural net calculate it internally.<br><br></div>2) A little more unorthodox: I was hoping to include ICA information for 1-minute epochs across the entire recording to see if there are changes in the predominant components as the stimulus progresses. Trying to simply segment an EEG file into minutes and run ICA on each segment resulted in a truly enormous amount of data when testing it with just one subject (~60 GB), and that would certainly be prohibitive to include in our analysis. Is there any way that this might be done more efficiently such that the resulting data is of a more manageable size?<br><br></div>Thanks in advance for any thoughts you can offer on either issue!<br><br></div>Best,<br><br></div>Chris<br></div>
_______________________________________________<br>
Eeglablist page: <a href="http://sccn.ucsd.edu/eeglab/eeglabmail.html" rel="noreferrer" target="_blank">http://sccn.ucsd.edu/eeglab/eeglabmail.html</a><br>
To unsubscribe, send an empty email to <a href="mailto:eeglablist-unsubscribe@sccn.ucsd.edu" target="_blank">eeglablist-unsubscribe@sccn.ucsd.edu</a><br>
For digest mode, send an email with the subject "set digest mime" to <a href="mailto:eeglablist-request@sccn.ucsd.edu" target="_blank">eeglablist-request@sccn.ucsd.edu</a></blockquote></div><br clear="all"><div><br></div>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr">Makoto Miyakoshi<br>Swartz Center for Computational Neuroscience<br>Institute for Neural Computation, University of California San Diego<br></div></div></div>