[Eeglablist] Guidance for exploratory study

Makoto Miyakoshi mmiyakoshi at ucsd.edu
Mon Oct 17 22:35:11 PDT 2016


Dear Rich,

> I've done the tutorial and reviewed hundreds of Eeglablist entries.

I appreciate that you took the effort to prepare.

> What a wonderfully supportive community.

I agree. Jerry Swartz will be happy to hear that!

> How should I organize the data files for import?  Right now I have one
data file for each subject.  Should I divvy them up into 120 separate files
(8 subjects x 15 data blocks), with each block uniquely named, and import
them separately?

As long as you have event markers that indicate the onset and offset of each
data block (each block will become an epoch, which in EEGLAB means a trial),
you don't need to chop the recording into pieces. Process each subject's file
as continuous data, and epoch everything to the same shorter length just
before the final averaging, etc.
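For example, here is a minimal sketch in MATLAB/EEGLAB (untested; the file
name and the marker label 'block_onset' are placeholders for whatever your
stimulus software writes into the data stream):

    % Load one subject's continuous dataset (placeholder file name)
    EEG = pop_loadset('filename', 'subject01.set');

    % Cut 30-second epochs time-locked to each block-onset marker.
    % This yields one dataset with one epoch per block (15 in total),
    % so there is no need to split the recording into separate files.
    EEG = pop_epoch(EEG, {'block_onset'}, [0 30]);
    EEG = eeg_checkset(EEG);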

> As to artifact cleaning, two questions - 1) If I reject artifacts by eye,
wouldn't that lead to data blocks of different lengths and so complicate
analysis?

Yes, it changes the number of epochs (i.e., trials) across conditions and
subjects. But usually a difference in the number of epochs does not matter,
since people use only one average value per subject (statistically speaking,
this could be debated, though).
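To illustrate, once the data are epoched, the per-subject average has the
same dimensions no matter how many trials survive rejection (a sketch
continuing from the epoched dataset above):

    % EEG.data is channels x timepoints x trials after epoching;
    % averaging over the third dimension gives one value per channel
    % and timepoint, whether 15 or, say, 12 epochs remain
    erp = mean(EEG.data, 3);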

> 2) Or should I use one of the automated methods available via EEGLAB
extensions (e.g., clean_rawdata, PREPPipeline, ADJUST, AAR), and if so,
which do you recommend?

I recommend clean_rawdata(). There is also a wiki page describing
preprocessing tips:
https://sccn.ucsd.edu/wiki/Makoto's_preprocessing_pipeline
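A minimal calling sketch (the numeric criteria below are only illustrative;
please check the plugin's help and the wiki page above before adopting them):

    % clean_rawdata() arguments, in order: flatline criterion (s),
    % high-pass transition band (Hz), channel correlation criterion,
    % line-noise criterion, ASR burst criterion (SD), window criterion
    EEG = clean_rawdata(EEG, 5, [0.25 0.75], 0.8, 4, 5, 0.5);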

Makoto



On Wed, Sep 21, 2016 at 12:15 PM, Ingram, Richard E - ingramre <
ingramre at jmu.edu> wrote:

> Greetings EEGLAB community,
>
> I'm an EEGLAB newbie.  I've done the tutorial and reviewed hundreds of
> Eeglablist entries.  What a wonderfully supportive community.  I wonder if
> you might have a few tips on how best to approach analyzing data from an
> exploratory study?
>
> I have 8 subjects, each of whom has 15 30-second time blocks of continuous
> EEG data (sampled at 128 Hz from 14 channels) reflecting differences in
> task difficulty (3 levels) and task condition (3 conditions).  I wrote
> software that handles the stimulus display and enters markers (generated by
> mouse click) into the data stream.  For this first look, I'm more interested
> in characterizing the continuous data than event-locked data (although I
> would like to come back to event-locked data later).
>
> How should I organize the data files for import?  Right now I have one
> data file for each subject.  Should I divvy them up into 120 separate files
> (8 subjects x 15 data blocks), with each block uniquely named, and import
> them separately?
>
> As to artifact cleaning, two questions - 1) If I reject artifacts by eye,
> wouldn't that lead to data blocks of different lengths and so complicate
> analysis?  2) Or should I use one of the automated methods available via
> EEGLAB extensions (e.g., clean_rawdata, PREPPipeline, ADJUST, AAR) and if so,
> which do you recommend?
>
> I know these questions may be simplistic but I do appreciate any and all
> guidance you may provide.
>
> Rich



-- 
Makoto Miyakoshi
Swartz Center for Computational Neuroscience
Institute for Neural Computation, University of California San Diego