[Eeglablist] Continuous data - baseline correction

Tarik S Bel-Bahar tarikbelbahar at gmail.com
Sun Jan 31 17:36:37 PST 2016


Hello Liz, here are some quick thoughts below.

If you haven't had a chance to yet, please check Google Scholar for the methods
and baselines used in other EEG navigation studies, such as those from
Gramann, and in other published studies with continuous data. Also see the
several papers on baselining that may be of interest to you (e.g., from
Delorme and colleagues).

- Consider a baseline window of at least 100 ms and up to 500 ms before video
onset, or before the cue to make a decision (a rough sketch of this follows below).
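
Here is a minimal sketch of that kind of pre-stimulus baseline in plain NumPy
(not EEGLAB code; the array layout, sampling rate, and exact window below are
assumptions for illustration). In EEGLAB itself, something like pop_rmbase()
with a pre-onset time range does the same kind of subtraction on epoched data.

    import numpy as np

    srate = 250                                 # assumed sampling rate (Hz)
    onset = int(0.5 * srate)                    # video onset 500 ms into the epoch
    bl_start = onset - int(0.2 * srate)         # 200 ms pre-onset baseline window

    def subtract_prestim_baseline(epochs, bl_start, onset):
        # mean over the pre-onset window, computed per epoch and channel
        baseline = epochs[:, :, bl_start:onset].mean(axis=2, keepdims=True)
        return epochs - baseline

    epochs = np.random.randn(40, 64, int(2.5 * srate))   # toy data: 40 epochs x 64 channels
    epochs_bc = subtract_prestim_baseline(epochs, bl_start, onset)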

- Consider that any period where you know there is reliably no stimulation or
active perception/decision-making across all participants can be useful as a baseline.

- Consider subtracting the mean of the whole trial/epoch as a way to baseline (also sketched below).
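
A sketch of that whole-epoch variant, under the same assumed
(epochs, channels, samples) layout as the snippet above:

    import numpy as np

    epochs = np.random.randn(40, 64, 625)       # toy (epochs, channels, samples) array
    # remove each epoch's own mean instead of a pre-stimulus mean
    epochs_demeaned = epochs - epochs.mean(axis=2, keepdims=True)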

- Consider that a baseline of some kind (wherever you take it from) can help
the event-related signal pop out. That said, if you are averaging multiple
trials/epochs from one condition, you should still get some picture of what
might be unique to that condition even without any baselining.

- Consider developing a general average baseline using information pooled
across all conditions (see the sketch below).
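
One rough way to sketch that (again plain NumPy with made-up shapes and
window): pool the pre-onset samples from every epoch of every condition, take
a single per-channel grand mean, and subtract it from all epochs of all conditions.

    import numpy as np

    srate = 250
    bl = slice(int(0.3 * srate), int(0.5 * srate))      # assumed 200 ms pre-onset window
    cond_a = np.random.randn(40, 64, int(2.5 * srate))  # toy epochs, one array per condition
    cond_b = np.random.randn(35, 64, int(2.5 * srate))

    # one grand baseline per channel, pooled over all epochs of both conditions
    pooled = np.concatenate([cond_a[:, :, bl], cond_b[:, :, bl]], axis=0)
    grand_baseline = pooled.mean(axis=(0, 2))[None, :, None]
    cond_a_bc = cond_a - grand_baseline
    cond_b_bc = cond_b - grand_baseline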

- I don't think you will be in big trouble if you compare two continuous
conditions within subjects without baselining the trials/epochs. I've seen
papers that find ways to baseline, others that don't baseline, and others
that don't address it at all.

- It's probably safest to look at regional power-spectrum changes between
conditions and groups (a sketch of one such comparison follows below). However,
you should see some effects in the time-frequency domain across your
conditions even without baselining.
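
Here is a rough sketch of that kind of comparison using SciPy's Welch
estimator (the channel count, sampling rate, and theta band limits are
assumptions for illustration; in practice you would use your real continuous
or concatenated-epoch data per condition):

    import numpy as np
    from scipy.signal import welch

    srate = 250
    cond_a = np.random.randn(64, 60 * srate)    # toy continuous data, one array per condition
    cond_b = np.random.randn(64, 60 * srate)

    def band_power(data, srate, fmin, fmax):
        freqs, psd = welch(data, fs=srate, nperseg=2 * srate)   # 2 s Welch windows
        band = (freqs >= fmin) & (freqs <= fmax)
        return psd[:, band].mean(axis=1)        # mean band power per channel

    theta_a = band_power(cond_a, srate, 4, 8)   # theta (4-8 Hz) power per channel
    theta_b = band_power(cond_b, srate, 4, 8)
    theta_diff = theta_a - theta_b              # channel-wise condition difference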

- There are more and more studies using continuous data in various ways that
face similar issues; you are not alone.

On Mon, Jan 25, 2016 at 4:57 PM, Liz Chrastil <chrastil at bu.edu> wrote:

> Hello List
>
> I'm new to EEG, and have a challenging situation when trying to do
> baseline correction.
>
> My study is looking at a fairly continuous task.  It's a navigation study,
> and participants are spending ~8 min exploring a new virtual environment.
> This exploration is broken up into choice points at intersections (they are
> not moving, and they press a button to indicate which way to turn or to go
> straight), and then a video showing the turn (or forward movement if they
> go straight).  However, we don't have a section that has no task at all,
> because we found that was too distracting to participants when they're
> trying to learn something.
>
> We have two groups of participants (there is an additional experimental
> manipulation), and so are primarily going to be doing group analysis, but
> we might also want to do a few analyses on within-subjects differences.
> For example, we might contrast theta during the video vs decision point
> epochs.  Times for videos range from 1500ms to 5000ms, decision points
> could vary quite a bit (depending on how long the person thinks about it),
> but average around 2000ms.
>
> My question is, if we don't do any baseline correction for individual
> epochs (and just do some general filtering), is this going to lead to a lot
> of trouble?  Again, we're not really sure what we would baseline correct to
> in this situation.  We could try correcting the videos to the decision
> points, but that's basically like doing some other task and could lead to
> worse errors.  Or is this a tragic design flaw?
>
> Thanks for your help!
>
>
>
> _______________________________________________
> Eeglablist page: http://sccn.ucsd.edu/eeglab/eeglabmail.html
> To unsubscribe, send an empty email to
> eeglablist-unsubscribe at sccn.ucsd.edu
> For digest mode, send an email with the subject "set digest mime" to
> eeglablist-request at sccn.ucsd.edu
>