[Eeglablist] Lack of convergence in AMICA solution

Farhan Ali (Asst Prof) farhan.ali at nie.edu.sg
Tue Sep 29 01:05:57 PDT 2020


Thanks Scott for the insightful comments. Some responses below.

- Perhaps there were no meaningful state transitions in your data, thus the 'extra' multiple models latched onto various noise data features.   See Hsu et al. 2018 on interpretation of multiple models in datasets involving state changes.
Possible. However, the data are a concatenation of 4 blocks representing 4 different conditions that the subjects underwent, and in some subjects (but not all) we do see different models being dominant in different conditions (somewhat similar to Hsu et al. 2018, NeuroImage). So we think that, at least in some subjects, the multiple models may have some basis.

- Perhaps the dimension of your recording (10-12 channels) was insufficient to create a stable decomposition -- more independent sources than channels...
I find this explanation more likely, and it relates to my earlier point. When we do an initial ICA decomposition assuming stationarity, select only the brain ICs, re-project them to channel space, and then run AMICA, the final solutions we get are very stable: all runs on the same data produce almost identical AMICA solutions. This pipeline likely reduced the dimensionality of the data (fewer independent components), which then allowed AMICA to converge. That’s my interpretation. What do you think of this pipeline?
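The dimensionality-reduction interpretation above can be illustrated with a small numpy sketch (not EEGLAB code; the matrices here are simulated stand-ins, and the "brain" component indices are arbitrary). Back-projecting a subset of ICs through columns of the mixing matrix leaves channel data whose rank equals the number of kept components, so a subsequent decomposition has fewer effective sources to separate than channels:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_samp = 12, 1000

# Simulated full-rank channel data (stand-in for preprocessed EEG).
data = rng.standard_normal((n_ch, n_samp))

# Hypothetical ICA result: mixing matrix A and unmixing matrix W = A^-1.
A = rng.standard_normal((n_ch, n_ch))
W = np.linalg.inv(A)

# Component activations.
acts = W @ data

# Keep only a subset of "brain" components (indices arbitrary here).
brain_ics = [0, 2, 5, 7, 9, 11]
data_clean = A[:, brain_ics] @ acts[brain_ics, :]

# The back-projected data now has rank equal to the number of kept ICs.
print(np.linalg.matrix_rank(data_clean))  # 6
```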

 -  What ASR processes did you employ? Data correction?
I used the "Reject data using Clean Rawdata and ASR" option in EEGLAB.

- How did you see that the solutions were 'very different'?  Different time domains for the 3 models? Different IC maps?  The post-AMICA menu invokes tools to plot the domains of the different models.
Different IC scalp maps, by visual inspection. The differences across runs were so obvious that I didn’t even bother with statistical comparisons.
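For readers who do want to go beyond visual inspection, one simple way to quantify run-to-run similarity is to pair the scalp maps (columns of the two runs' mixing matrices) by absolute correlation, which ignores ICA's sign and scale indeterminacy. A minimal, hypothetical numpy helper (greedy matching; not part of EEGLAB or AMICA):

```python
import numpy as np

def match_topographies(A1, A2):
    """Greedily pair scalp maps (columns of two runs' mixing matrices)
    by absolute correlation across channels. Returns a dict mapping
    columns of A1 to columns of A2, plus the per-pair |r| values."""
    def norm(A):
        A = A - A.mean(axis=0)          # remove per-map channel mean
        return A / np.linalg.norm(A, axis=0)
    C = np.abs(norm(A1).T @ norm(A2))   # |correlation| matrix
    pairing, sims = {}, []
    for _ in range(C.shape[0]):
        i, j = np.unravel_index(np.nanargmax(C), C.shape)
        pairing[i] = j
        sims.append(C[i, j])
        C[i, :] = np.nan                # each map is matched only once
        C[:, j] = np.nan
    return pairing, sims

# Sanity check: a run compared against itself matches perfectly.
rng = np.random.default_rng(1)
A = rng.standard_normal((12, 12))
pairing, sims = match_topographies(A, A)
print(all(np.isclose(s, 1.0) for s in sims))  # True
```

Pairs with low |r| would flag components that have no counterpart in the other run.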

- One should expect that most ICs should be shared by all models (e.g., ICs accounting for eyeblink artifact? etc.).
It’s true that some ICs are common, particularly horizontal eye movements, but the majority of the ICs are not shared across models within an individual run, at least in our data.


Regards,
Farhan

From: Scott Makeig <smakeig at gmail.com>
Sent: 28 September 2020 22:56
To: Farhan Ali (Asst Prof) <farhan.ali at nie.edu.sg>; Jason Palmer <japalmer29 at gmail.com>; Shawn Hsu <goodshawn12 at gmail.com>
Cc: eeglablist at sccn.ucsd.edu
Subject: Re: [Eeglablist] Lack of convergence in AMICA solution

Farhan -

You raise good questions. In fact, I know of no systematic assessment of the stability of multi-model AMICA decomposition. I suspect, in your case, the following:
- Perhaps there were no meaningful state transitions in your data, thus the 'extra' multiple models latched onto various noise data features.
   See Hsu et al. 2018 on interpretation of multiple models in datasets involving state changes.
- Perhaps the dimension of your recording (10-12 channels) was insufficient to create a stable decomposition -- more independent sources than channels...
   What ASR processes did you employ? Data correction?
- How did you see that the solutions were 'very different'?  Different time domains for the 3 models? Different IC maps?
   The post-AMICA menu invokes tools to plot the domains of the different models.
- One should expect that most ICs should be shared by all models (e.g., ICs accounting for eyeblink artifact? etc.).

Scott Makeig

On Sun, Sep 27, 2020 at 10:33 PM Farhan Ali (Asst Prof) <farhan.ali at nie.edu.sg> wrote:
Hi EEG experts,

I'm hoping to use AMICA to assess brain dynamics in an EEG dataset.

I'm running into the problem that the AMICA model solutions are very different across runs on the exact same input data, which suggests that the estimation is not converging to a global solution. The dataset is as follows: 10-12 channels (after ASR), ~100k samples (128 Hz). The AMICA runs on a local computer used 5 states/models, 3 generalized Gaussian mixture components per source, and default settings for everything else. I tried changing all sorts of settings, including running for over 20,000 iterations (which took hours); nothing helped with convergence across runs.

The only thing that helped was to do an initial ICA decomposition, choose a subset of components (high probability of brain signals), project back to channel space, then run AMICA on this data using extended Infomax with 1 Gaussian (instead of a generalized Gaussian mixture). Then I get convergence in AMICA solutions across runs. However, within each run, all models returned have almost identical IC scalp topographies, differing only in component ordering and Gaussian PDFs. I suppose doing an initial ICA assuming stationarity reduces the dimensionality of the data and effectively makes the subsequent AMICA models share ICs? But how then do I interpret the different models? Do the ordering differences and the variance of the Gaussian PDFs become the basis for interpreting the models, since the IC scalp topographies are identical?

Would anyone be able to give some insights? Thanks.

Regards,
Farhan
National Institute of Education, Singapore

________________________________

CONFIDENTIALITY: This email is intended solely for the person(s) named and may be confidential and/or privileged. If you are not the intended recipient, please delete it, notify us and do not copy, use, or disclose its contents.
Towards a sustainable earth: Print only when necessary. Thank you.
_______________________________________________
Eeglablist page: http://sccn.ucsd.edu/eeglab/eeglabmail.html
To unsubscribe, send an empty email to eeglablist-unsubscribe at sccn.ucsd.edu
For digest mode, send an email with the subject "set digest mime" to eeglablist-request at sccn.ucsd.edu


--
Scott Makeig, Research Scientist and Director, Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California San Diego, La Jolla CA 92093-0559, http://sccn.ucsd.edu/~scott