[Eeglablist] Unstable and inconsistent validation in SIFT

boglyas kamanbalazs at gmail.com
Tue Aug 20 04:58:51 PDT 2013


Dear list,

I'm writing about two things:

- I'm using SIFT for a study on different subject groups. After trying
numerous parameter sets, a 0.35-second window length, 0.01-second step size,
and model order 21 proved to be the best option. It works really well on
the majority of the datasets (all datasets: 250 Hz sampling rate, 60
epochs, 1250 frames per epoch, 5-second trials: 1 second of 'nothing',
1 second of preparation [a cross appears on the screen], 3 seconds of motor
imagery [right hand, left hand, or legs, according to the arrow direction]),
but for one of the datasets it gives really strange validation results.

The figure of validation is the following:
http://i.imgur.com/AKbCojc.png

100% whiteness, but there are some windows that are inconsistent and
unstable (for the other datasets, validation gives 100% stability and
~85% consistency).

I tried about 10 different parameter sets, and it seems that whenever the
model fit is good (high whiteness), the model is unstable or inconsistent.

What's your opinion: can I use this dataset with the fitted model if the
number of unstable/inconsistent windows is low? Or is it mathematically
incorrect, so that it will have a significant effect on the results for
the whole recording? Should I keep trying to find parameters that give
correct validation?
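(For context on what "unstable" means here: a fitted VAR model is stable
exactly when all eigenvalues of its companion matrix lie strictly inside
the unit circle, which is the standard criterion such validation checks
are based on. A minimal NumPy sketch of that check -- a hypothetical
helper for illustration, not SIFT's actual code:

```python
import numpy as np

def var_is_stable(coeff_mats):
    """Check stability of a VAR(p) model.

    coeff_mats: list of p (M x M) coefficient matrices A_1..A_p.
    The model is stable iff all eigenvalues of the (M*p x M*p)
    companion matrix have modulus strictly less than 1.
    """
    p = len(coeff_mats)
    M = coeff_mats[0].shape[0]
    companion = np.zeros((M * p, M * p))
    # Top block row holds the coefficient matrices [A_1 ... A_p].
    companion[:M, :] = np.hstack(coeff_mats)
    # Subdiagonal identity blocks shift the lagged state.
    if p > 1:
        companion[M:, :-M] = np.eye(M * (p - 1))
    eigvals = np.linalg.eigvals(companion)
    return bool(np.max(np.abs(eigvals)) < 1.0)

# A VAR(1) with A = 0.5*I is stable; with A = 1.1*I it is not.
print(var_is_stable([0.5 * np.eye(2)]))  # True
print(var_is_stable([1.1 * np.eye(2)]))  # False
```

An eigenvalue only slightly outside the unit circle in a few windows
usually indicates a near-nonstationary segment rather than a globally
wrong model, which is why the question of whether a few bad windows
matter is worth asking.)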

- My second question would be:

Is it possible that the validation figure is correct but the results are
false? If the results seem to be nonsense, the problem is probably with the
original EEG file (noise, etc.); I just want to be sure by asking you.

Thanks in advance,

Balazs Kaman
