[Eeglablist] Unstable and inconsistent validation in SIFT

boglyas kamanbalazs at gmail.com
Wed Aug 21 06:46:52 PDT 2013


Dear Tim,

Since I used the GUI, the 'cross-correlation time lags' parameter was left
at its default value, which is 50 (I found it in est_checkMVARWhiteness.m).
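
(If my arithmetic is right, 50 may already be a lot for our windows: a
0.35 s window at 250 Hz contains only about 87 samples, so at lag 50 each
cross-correlation is estimated from just 87 - 50 = 37 sample pairs per
trial.)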

As for the other question, these are the parameters I found to work best
for AMVAR model fitting:

Vieira-Morf algorithm
0.35 s window length
0.01 s step size
model order 21
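
For reference, the equivalent scripted call would look something like the
sketch below. The name/value pairs are my best guess at SIFT's est_fitMVAR
interface (I have not verified them against our SIFT version), so please
treat them as placeholders and check the function's help:

% Hypothetical sketch of the AMVAR fitting step; parameter names unverified.
EEG.CAT.MODEL = est_fitMVAR(EEG, 0, ...
    'algorithm', 'vieira-morf', ... % Vieira-Morf lattice algorithm
    'winlen',    0.35,          ... % sliding-window length in seconds
    'winstep',   0.01,          ... % window step size in seconds
    'morder',    21);               % model order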

It gives really good validation results about 90% of the time (apart from
the unstable dataset). I attached some validation graphs:

http://i.imgur.com/t9XS2ve.png
http://i.imgur.com/jbrJlU3.png
http://i.imgur.com/UPXrVLv.png

The first one in particular seems "too good to be true". Do you think it
is possible that this is just a bug?
(The connectivity results after the model fitting do seem to make sense;
we got what we were looking for.)

Balazs


2013/8/20 Tim Mullen <mullen.tim at gmail.com>

> Stability here is a key factor. If the model is unstable, the
> autocorrelation and multivariate portmanteau tests are not meaningful, as
> some basic assumptions underlying the tests are violated.
> Any statistical results derived from an unstable model should be rejected.
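>
> As a quick independent check, stability can be verified directly from the
> fitted coefficients: a VAR(p) model is stable iff all eigenvalues of its
> companion matrix lie strictly inside the unit circle. A minimal, untested
> sketch for a single window, where A = [A1 ... Ap] is the M x (M*p) matrix
> of fitted coefficient matrices:
>
> % Stability check for one VAR(p) window with M channels.
> [M, Mp] = size(A);               % A = [A1 A2 ... Ap], each Ak is M x M
> p = Mp / M;
> % Companion form: coefficients on top, shifted identity blocks below.
> C = [A; eye(M*(p-1)), zeros(M*(p-1), M)];
> stable = all(abs(eig(C)) < 1);   % true => this window is stable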
>
> On the other hand, it's possible that because you have a short time window
> (and high model order), the residual cross-correlations are poorly
> estimated at larger time lags, leading to bias in the whiteness tests.
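>
> To see why: at lag k, the cross-correlation in a window of N residual
> samples is estimated from only N - k sample pairs, so the largest lags are
> the noisiest. For intuition, a standard univariate Ljung-Box statistic for
> a single residual channel looks like this (SIFT's multivariate portmanteau
> tests differ in detail; this is only a sketch):
>
> % x: residuals of one channel in one window (column vector); h: max lag.
> x = x - mean(x);  N = numel(x);  h = 50;
> rho = zeros(h, 1);
> for k = 1:h
>     rho(k) = sum(x(1+k:N) .* x(1:N-k)) / sum(x.^2);  % lag-k autocorrelation
> end
> Q = N*(N+2) * sum(rho.^2 ./ (N - (1:h)'));  % Ljung-Box statistic
> % For model residuals, reduce the degrees of freedom by the number of
> % estimated AR parameters; chi2cdf needs the Statistics Toolbox.
> pval = 1 - chi2cdf(Q, h);
>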
> How many cross-correlation time lags are being used for the whiteness
> tests here?
>
> Tim
>
>
> On Tue, Aug 20, 2013 at 4:58 AM, boglyas <kamanbalazs at gmail.com> wrote:
>
>> Dear list,
>>
>> I'm writing about two things:
>>
>> - I'm using SIFT for my study on different subject groups. After trying
>> out numerous parameter sets, a 0.35 second window length, 0.01 second
>> step size, and model order 21 proved to be the best option. It works
>> really well on the majority of the datasets (all datasets: 250 Hz
>> sampling rate, 60 epochs, 1250 frames per epoch, 5-second trials
>> consisting of 1 second of 'nothing', 1 second of preparation [a cross
>> appears on the screen], and 3 seconds of motor imagery [right hand, left
>> hand, or legs, according to the arrow direction]), but for one of the
>> datasets it gives really strange validation results.
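>>
>> (For scale: a 0.35 s window at 250 Hz spans about 87 samples per channel;
>> pooled across the 60 epochs, that is roughly 87 * 60 = 5220 samples per
>> channel from which each order-21 window model is estimated.)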
>>
>> The validation figure is here:
>> http://i.imgur.com/AKbCojc.png
>>
>> It shows 100% whiteness, but there are some windows that are inconsistent
>> and unstable (for the other datasets, validation gives 100% stability and
>> ~85% consistency).
>>
>> I tried about 10 different parameter sets, and it seems that whenever the
>> model fits well (high whiteness), it is unstable or inconsistent.
>>
>> What's your opinion: can I use this dataset with the fitted model if the
>> number of unstable/inconsistent windows is low? Or is that mathematically
>> incorrect, with a significant effect on the results of the whole
>> recording? Should I keep trying to find parameters that give correct
>> validation?
>>
>> - My second question would be:
>>
>> Is it possible that the validation figure is correct but the results are
>> still false? I suspect that if the results look like nonsense, the
>> problem is with the original EEG file (noise, etc.); I just want to be
>> sure by asking you.
>>
>> Thanks in advance,
>>
>> Balazs Kaman
>>
>> _______________________________________________
>> Eeglablist page: http://sccn.ucsd.edu/eeglab/eeglabmail.html
>> To unsubscribe, send an empty email to
>> eeglablist-unsubscribe at sccn.ucsd.edu
>> For digest mode, send an email with the subject "set digest mime" to
>> eeglablist-request at sccn.ucsd.edu
>>
>
>
>
> --
> ---------  perception  -----------
>

