[Eeglablist] Lack of convergence in AMICA solution

Scott Makeig smakeig at ucsd.edu
Tue Oct 20 15:02:48 PDT 2020


Farhan -

On Wed, Sep 30, 2020 at 12:56 AM Farhan Ali (Asst Prof) <
farhan.ali at nie.edu.sg> wrote:

> Thanks, Scott. Some quick responses below:
>
> >> Then I might well expect to find some different 'brain' ICs in the
> different models...  as we do.  (p.s. How many channels were recorded?)
>
> 14. After ASR, typically only 10-11 channels remain “good” using default
> settings.
>
> This is a different regime than we usually work in (32-356 channels) ...
With so few degrees of freedom, ICA is constrained in trying to fit what are
likely >>14 ~independent processes - thus the run-to-run variability.
However, 14 channels is few enough that running multi-model AMICA many times
and clustering the solutions (e.g., clustering models with matching domains,
and/or with similar IC scalp maps and/or IC time courses in those domains...)
would be possible - a nice experiment in itself...


> >> Wouldn't this be expected?  Were the ICs strikingly non-dipolar (high
> resid. var.)?
>
> I expect different ICs across models of the same run.
>
> Many of the ICs should be ~the same in the different models (accounting
for eye blink artifact, for example)...


> But the issue is for the same dataset, when AMICA is run twice, the ICs
> for say model 1, run 1, cannot be found in model 1 run 2, or model 2 run 2,
> etc. This was the original problem I posted to the board about.
>
> You could use matcorr on the IC scalp maps, then use sum(abs())/nchans of
the corr vector output as an [0,1] (IC scalp map-wise) model similarity
measure to pair or cluster models - but do not forget the importance of the
model domains (= the model-preferred data points)...
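
A minimal MATLAB sketch of that pairing step, assuming A1 and A2 are the
(nchans x nICs) mixing matrices of the two models to compare (e.g., taken
from the AMICA plugin's loadmodout15() output); with a square decomposition
nICs equals nchans, so either divisor gives the same [0,1] measure:

    % matcorr() pairs rows of its two inputs by maximal correlation,
    % so transpose the mixing matrices to put one scalp map per row.
    [rho, ix, iy] = matcorr(A1', A2');         % rho: one correlation per matched IC pair
    similarity = sum(abs(rho)) / size(A1, 2);  % rough [0,1] scalp-map similarity of the two models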

And the ICs are highly dipolar. When I run dipfit for a single dipole
> source, the residual variance is typically less than 10%.
>
> With so few channels, achieving this is not as difficult (or unexpected) as
it would be with, e.g., 256 channels...
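
As a side note, a minimal MATLAB sketch of pulling out the ~dipolar ('brain')
ICs once dipole fitting has been run (assuming, e.g., pop_multifit() has
populated EEG.dipfit.model; rv is each IC's residual variance, stored as a
fraction):

    rv      = [EEG.dipfit.model.rv];   % residual variance of each IC's dipole fit
    brainIC = find(rv < 0.15);         % ICs under a ~10-15% residual-variance threshold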


> >> Might you try computing the mutual information reduction (MIR) in the
> data by single-model ICA vs multimodel ICA (after breaking the data into
> the different model ranges).
>
> Good idea. I compared AMICA with 1 model vs. AMICA with 5 models of the
> same exact dataset and ran the MIR with the post-AMICA utility. The
> multi-model ICA produced 2-3 times higher IC to IC MIR (averaged across all
> pairwise IC comparisons and models) than the single-model ICA for a few of
> the datasets I checked. So, multi-model ICA is somewhat justified?
>
> To check: In these comparisons, did you compute the MIR over the set of time
points in the domains of both models?
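
For reference, one common way to estimate decomposition-level MIR on a single
model's domain is MIR = sum_i h(x_i) - sum_i h(s_i) + log|det(W)|. A minimal
MATLAB sketch, assuming x is the (nchans x npoints) data restricted to that
domain and W is the model's full unmixing matrix (weights*sphere); the crude
histogram entropy estimator below merely stands in for whatever estimator the
post-AMICA utility uses:

    s   = W * x;                                    % IC activations on this domain
    hx  = arrayfun(@(i) hist_entropy(x(i,:)), 1:size(x,1));
    hs  = arrayfun(@(i) hist_entropy(s(i,:)), 1:size(s,1));
    mir = sum(hx) - sum(hs) + log(abs(det(W)));     % in nats; larger = more dependence removed

    function h = hist_entropy(v)
    % plug-in differential entropy estimate from a histogram
    [p, edges] = histcounts(v, max(10, round(sqrt(numel(v)))), 'Normalization', 'probability');
    w  = diff(edges);
    nz = p > 0;
    h  = -sum(p(nz) .* log(p(nz) ./ w(nz)));
    end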

But multi-model ICA has many more parameters. From a model-fitting
> perspective, should I penalize for that? Say using AIC or BIC? However, a
> priori there is no reason to think a more complex model will necessarily
> produce more independent sources (unlike, say, regression in the context of
> over-fitting), so I am not sure any penalty is necessary. Any advice?
>
> Well, with so few channels, underfitting will be more of a problem than
overfitting, no matter how many models (within reason) you compute ...
You might try fitting, say, 20 models to see what happens...
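
A minimal sketch of such a run, assuming the AMICA plugin's runamica15() and
loadmodout15() are on the path (the output folder name here is arbitrary):

    outdir = fullfile(pwd, 'amicaout_20models');
    runamica15(EEG.data(:,:), 'num_models', 20, 'outdir', outdir);
    mods = loadmodout15(outdir);   % mods.A(:,:,m) is expected to hold model m's mixing matrix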

>
>
> >> It doesn't sound surprising, since ICA has already 'honed in' on stable
> IC processes. Thus, unlikely that multi-model ICA would highlight
> interesting differences e.g. between conditions. (p.s. conditions = what
> here?)
>
> Conditions = subjects undergoing different activities in a real-world
> classroom. In one condition, they were listening to a lecture by a teacher;
> in another condition, they were watching a video projected to the front of
> the class, etc. It’s basically the Dikker et al. (2017, *Current Biology*)
> dataset that we got our hands on and are analysing.
>
>
>
>
>
> Regards,
>
> Farhan
>

> Scott

>
>
> *From:* Scott Makeig <smakeig at ucsd.edu>
> *Sent:* 30 September 2020 04:31
> *To:* Farhan Ali (Asst Prof) <farhan.ali at nie.edu.sg>
> *Cc:* Jason Palmer <japalmer29 at gmail.com>; Shawn Hsu <
> goodshawn12 at gmail.com>; eeglablist at sccn.ucsd.edu; Zeynep Akalin Acar <
> zeynep at sccn.ucsd.edu>
> *Subject:* Re: [Eeglablist] Lack of convergence in AMICA solution
>
>
>
> Farhan - See my >> comment below.  -Scott
>
>
>
> On Tue, Sep 29, 2020 at 4:06 AM Farhan Ali (Asst Prof) <
> farhan.ali at nie.edu.sg> wrote:
>
> Thanks Scott for the insightful comments. Some responses below.
>
>
>
>  *- Perhaps there were no meaningful state transitions in your data, thus
> the 'extra' multiple models latched onto various noise data features.*  * See
> Hsu et al. 2018 on interpretation of multiple models in datasets involving
> state changes.*
>
> Possible. However, the data is a concatenation of 4 blocks representing 4
> different conditions that the subjects underwent, and in some (but not all)
> of them we do see different models being dominant in different conditions
> (somewhat similar to Hsu et al. 2018, NeuroImage), so we think that, at
> least in some subjects, multiple models may have some basis.
>
> >> Then I might well expect to find some different 'brain' ICs in the
> different models...  as we do.  (p.s. How many channels were recorded?)
>
> >> Note: By 'brain' ICs above I mean those with resid. var. to the (single
> or dual symmetric) dipole model < ~15%.
>
>
>
> *- How did you see that the solutions were 'very different'?  Different
> time domains for the 3 models? Different IC maps?  The post-AMICA menu
> invokes tools to plot the domains of the different models.*
>
> Different IC scalp maps by visual inspection. The differences across runs
> were quite obvious, so I didn’t even bother doing statistical comparisons.
>
> >> Wouldn't this be expected?  Were the ICs strikingly non-dipolar (high
> resid. var.)?
>
> >> Might you try computing the mutual information reduction (MIR) in the
> data by single-model ICA vs multimodel ICA (after breaking the data into
> the different model ranges).
>
> >> Does multimodel AMICA decomposition effect stronger MIR than single
> model? If not, then there are likely no meaningful changes in source-space
> configuration revealed by multi-model decomposition.
>
>
>
> *- One should expect that most ICs should be shared by all models (e.g.,
> ICs accounting for eyeblink artifact? etc.).*
>
> It’s true that some ICs are common, particularly horizontal eye movements,
> but the majority of the ICs are not shared across models within an
> individual run, at least in our data.
>
>  >> Again, without saying what differences in which kinds of IC maps, it
> is hard to say anything...
>
>
>
> *- Perhaps the dimension of your recording (10-12 channels) was
> insufficient to create a stable decomposition -- more independent sources
> than channels...*
>
> I find this explanation more likely and related to my earlier point. When
> we do an initial ICA decomposition assuming stationarity, select only the
> brain ICs, re-project them, and then run AMICA, the final solutions we get
> are very stable. All runs on the same data produce almost identical AMICA
> solutions. This pipeline likely reduced the dimensionality of the data
> (fewer independent components), allowing AMICA to converge. That’s my
> interpretation. What do you think of this pipeline?
>
> >> It doesn't sound surprising, since ICA has already 'honed in' on stable
> IC processes. Thus, unlikely that multi-model ICA would highlight
> interesting differences e.g. between conditions. (p.s. conditions = what
> here?)
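
For completeness, a minimal sketch of the reduced-dimension pipeline described
above, using standard EEGLAB/AMICA calls; here brainIC (the indices of the ICs
retained as brain sources) is assumed to come from dipole residual variance
and/or visual inspection, and 'pcakeep' tells AMICA about the reduced rank of
the back-projected data:

    EEG = pop_runica(EEG, 'icatype', 'runica', 'extended', 1);              % initial stationary ICA
    EEG = pop_subcomp(EEG, setdiff(1:size(EEG.icaweights,1), brainIC), 0);  % keep only the brain ICs
    runamica15(EEG.data(:,:), 'num_models', 5, ...
               'outdir', fullfile(pwd, 'amicaout_reduced'), 'pcakeep', numel(brainIC));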
>
>
>
> * -  What ASR processes did you employ? Data correction?*
>
> I used the *Reject data using Clean RawData and ASR* option in EEGLAB
>
> >> I'll leave suggestions on this to others.  -Scott
>
>
>
> --
>
> Scott Makeig, Research Scientist and Director, Swartz Center for
> Computational Neuroscience, Institute for Neural Computation, University of
> California San Diego, La Jolla CA 92093-0961, http://sccn.ucsd.edu/~scott
>


-- 
Scott Makeig, Research Scientist and Director, Swartz Center for
Computational Neuroscience, Institute for Neural Computation, University of
California San Diego, La Jolla CA 92093-0961, http://sccn.ucsd.edu/~scott


