[Eeglablist] AMICA lrate gets stuck

Norman Forschack forschack at cbs.mpg.de
Tue Nov 10 13:02:00 PST 2015


Hi Jason and Makoto,

In this thread, Jason announced that a debugged version of AMICA was coming soon. I just wanted to kindly ask whether this has been done yet, or whether we should apply the workaround mentioned below?

Thanks a lot!
Norman

----- On Aug 25, 2015, at 6:28 AM, Jason Palmer japalmer29 at gmail.com wrote:

> Sorry, this line should be:
> 
>>> [W,S,mods] = runamica12(newdat, ....);
> 
> From: Jason Palmer [mailto:japalmer at ucsd.edu]
> Sent: Monday, August 24, 2015 9:17 PM
> To: 'Kevin Tan'; 'mmiyakoshi at ucsd.edu'; 'EEGLAB List'
> Subject: RE: [Eeglablist] AMICA lrate gets stuck
> 
> Hi Kevin,
> 
> Sorry, there is a bug: the code tries to take the sqrt of the negative
> eigenvalue (even though that dimension is being removed), making the LL = NaN
> and aborting. The eigenvalue is actually essentially zero, resulting from rank
> deficiency likely due to re-referencing, so more cleaning won't necessarily
> change the arbitrarily small value to positive unless you increase the
> dimensionality of the data. I'm currently fixing this bug, along with some
> other sphering and PCA issues, and will release debugged versions soon.
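> 
> To see where the negative value comes from, here is a minimal sketch on toy
> data (hypothetical, not the Amica internals): average re-referencing removes
> one dimension, so the smallest covariance eigenvalue is zero in theory and
> can come out slightly negative in floating point.
> 
>>> X = randn(8, 5000);               % toy data: 8 channels, 5000 samples
>>> Xr = X - repmat(mean(X,1), 8, 1); % average re-reference (rank drops to 7)
>>> min(eig(cov(Xr')))                % ~0, often slightly negative numerically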
> 
> For now you can do the workaround of sphering before Amica, e.g.:
> 
>>> [U,D] = eig(cov(EEG.data(:,:)'));
>>> U = fliplr(U); D = fliplr(flipud(D)); % make descending order
>>> dd = diag(D); numeig = sum(dd > 1e-9);
>>> Sph = diag(sqrt(1./dd(1:numeig))) * U(:,1:numeig)';
>>> newdat = Sph * EEG.data(:,:); % reduce to numeig dimensions
>>> [W,S,mods] = runamica12(newdat, ....); % per the correction above
>>> EEG.icasphere = S*Sph;
>>> EEG.icaweights = W;
> 
> This should run Amica on full-rank data and avoid the negative near-zero
> eigenvalue problem until it is fixed in the Amica code.
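> 
> As a quick sanity check on the result (using the variables above; EEGLAB
> forms component activations with the same combined transform):
> 
>>> acts = EEG.icaweights * EEG.icasphere * EEG.data(:,:);
>>> size(acts) % [numeig npts] -- one row per component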
> 
> Best,
> 
> Jason
> 
> From: Kevin Tan [mailto:kevintan at cmu.edu]
> Sent: Monday, August 24, 2015 6:39 PM
> To: Jason Palmer; mmiyakoshi at ucsd.edu; EEGLAB List
> Subject: Re: [Eeglablist] AMICA lrate gets stuck
> 
> Hi Jason,
> 
> I'm running into a negative min eigenvalue issue for ~25% of my subjects.
> This results in the AMICA binary not exporting anything to the output dir,
> stopping the main loop prematurely.
> 
> Before running AMICA, the data is fairly aggressively cleaned:
> 1) PREP pipeline
> 2) remove mastoids & PREP-interpolated chans for rank reduction
> 3) 1 Hz high-pass
> 4) epoch, no baseline correction
> 5) epoch rejection (channel mean deviation, variance, max amplitude diff > 2.5 SDs)
> 
> Not sure what else I can do to clean the data to make the eigenvalues positive.
> 
> I'm using a Biosemi 128-channel system, which is known for dynamic range
> issues, but I run everything in double precision. Not sure if demeaning each
> channel would help since the data is already high-passed.
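> 
> (For reference, a quick way to demean each channel in MATLAB -- just
> illustrating the operation in question, not claiming it fixes the
> eigenvalue:)
> 
>>> dat = EEG.data(:,:);                             % chans x (frames*epochs)
>>> dat = dat - repmat(mean(dat,2), 1, size(dat,2)); % subtract each channel's mean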
> 
> Also, not sure if it matters, but AMICA still seems to do dimension reduction
> even though I removed channels to compensate for the rank reduction from the
> 'robust' reference.
> 
> For the subjects that do run through AMICA, the ICs seem a lot cleaner than
> with Infomax, which makes me want to stick with AMICA.
> 
> Bad subject log example:
> 
> 1 : data = -2.3969683647155762 -2.910743236541748
> getting the mean ...
> mean = -7.73083349593588626E-2 -8.98852135101791128E-2 -0.17064473790401868
> subtracting the mean ...
> getting the sphering matrix ...
> cnt = 706560
> doing eig nx = 128 lwork = 163840
> minimum eigenvalues = -4.02618476752492072E-14 0.59534647773064309 0.66105027982216646
> maximum eigenvalues = 3718.0696499000956 2980.9762500746847 1012.6027880321443
> num eigs kept = 127
> numeigs = 127
> 
> 
> Good subject log example:
> 
> 1 : data = 3.1855385303497314 5.7855358123779297
> getting the mean ...
> mean = -0.38155908557715745 -0.27761248863920301 -0.3608881566308772
> subtracting the mean ...
> getting the sphering matrix ...
> cnt = 703488
> doing eig nx = 130 lwork = 169000
> minimum eigenvalues = 1.35676859295476523E-13 0.80288149429025613 1.1256218532296671
> maximum eigenvalues = 9749.2425686202987 1277.5793884179475 700.98046655297128
> num eigs kept = 129
> numeigs = 129
> 
> Many many thanks for the continued help!
> 
> –Kevin
> 
> --
> Kevin Alastair M. Tan
> Lab Manager/Research Assistant
> Department of Psychology & Center for the Neural Basis of Cognition
> Carnegie Mellon University
> Baker Hall 434 | kevintan at cmu.edu | tarrlab.org/kevintan
> 
> On Fri, Aug 21, 2015 at 7:36 PM, Makoto Miyakoshi <mmiyakoshi at ucsd.edu> wrote:
> 
> Dear Kevin and Jason,
> 
> In Figure 1 of the following paper, you can see an example of how the log
> likelihood of the AMICA model shifts across iterations.
> 
> Rissling AJ, Miyakoshi M, Sugar CA, Braff DL, Makeig S, Light GA. (2014).
> Cortical substrates and functional correlates of auditory deviance processing
> deficits in schizophrenia. Neuroimage Clin, 6:424-437.
> 
> You can see that after 700 iterations there is no 'jump' any more, which may
> correspond to what Jason describes as reaching the 'noise floor'. The beta
> version (?) of AMICA we use here at SCCN has a convergence criterion and
> usually stops at around 1000 iterations (the smallest I saw was around 700,
> the maximum 1500).
> 
> Kevin, your questions are always very interesting and I learn a lot from
> them. Thank you, Jason, for your answers and for sharing your knowledge.
> 
> Makoto
> 
> On Mon, Aug 17, 2015 at 4:35 PM, Jason Palmer <japalmer29 at gmail.com> wrote:
> 
> Hi Kevin,
> 
> The Infomax wchange is actually the weight change TIMES the lrate, and the
> lrate anneals down to 1e-7. So the actual final weight change for extended
> Infomax is 1e7 * wchange.
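> 
> As a concrete (hypothetical) example of that scaling:
> 
>>> lrate = 1e-7;                 % final annealed runica lrate
>>> wchange = 1e-12;              % reported value = true change * lrate (made-up)
>>> true_change = wchange / lrate % = 1e-5, the same order as a typical Amica nd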
> 
> For Amica, if the nd weight change gets down to the 10^-5 magnitude, that is
> usually about the best you can expect, given the large number of parameters
> being estimated and the finite computer precision. How small it can get
> depends on the number of samples you have compared to the number of channels.
> More channels = more parameters (nchan^2) = relatively little data = "noisier"
> convergence. More data = better determined optimum = less noisy convergence =
> smaller nd. For 64 channels with 100,000 samples, an nd of 10^-5 sounds
> appropriate.
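> 
> (Rough numbers for that 64-channel case:)
> 
>>> nchan = 64; nsamp = 100000;
>>> nsamp / nchan^2 % ~24 samples per unmixing parameter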
> 
> However, you can change the maximum number of iterations from the default
> 2000 using the 'maxiter' keyword. The LL should continue to increase and the
> nd decrease (or at least not increase) beyond 2000 iterations, but not
> significantly. A weight-change "noise floor" should be reached, where the LL
> continues to increase by less and less, with possible reductions in lrate,
> and the nd hovers around that floor.
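> 
> E.g. (a minimal sketch; other runamica12 arguments omitted):
> 
>>> [W,S,mods] = runamica12(EEG.data(:,:), 'maxiter', 4000);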
> 
> Best,
> 
> Jason
> 
> From: Kevin Tan [mailto:kevintan at cmu.edu]
> Sent: Monday, August 17, 2015 4:21 PM
> To: japalmer at ucsd.edu; EEGLAB List
> Subject: Re: AMICA lrate gets stuck
> 
> Hi Jason,
> 
> Thanks so much for the detailed response, really helps clarify what drives the
> lrate changes between the two implementations.
> 
> However, for the same dataset processed the same way, AMICA yields a higher
> wchange at the last iteration (0.0000464763) versus extended Infomax
> (0.000000).
> 
> What are some reasons for this discrepancy, and what can I do to improve it?
> Or is weight change not comparable between the two implementations? The
> entire AMICA log is linked in the original post if that helps.
> 
> Thanks again,
> Kevin
> 
> --
> Kevin Alastair M. Tan
> Lab Manager/Research Assistant
> Department of Psychology & Center for the Neural Basis of Cognition
> Carnegie Mellon University
> Baker Hall 434 | kevintan at cmu.edu | tarrlab.org/kevintan
> 
> On Mon, Aug 17, 2015 at 7:06 PM, Jason Palmer <japalmer29 at gmail.com> wrote:
> 
> Hi Kevin,
> 
> The Amica lrate is not supposed to decrease. The algorithm is a more typical
> gradient descent / Newton optimization algorithm, as opposed to the Infomax
> implementation in runica.m, which uses a type of simulated annealing, deciding
> whether to reduce the learning rate based on the angle between recent update
> directions. The idea there is that this angle will be small when the algorithm
> is near an optimum, as though it is "heading right for it": the lrate gets
> reduced if the algorithm is moving "erratically", with a large angle between
> consecutive directions, and doesn't get reduced if the estimate is "moving
> smoothly". In practice, this annealing method usually ends up reducing the
> learning rate continuously until it reaches the preset minimum, which usually
> happens at around 500 iterations (500 reductions). I.e., the angle is never
> actually small, so the stopping condition is essentially a maximum number of
> iterations, with the updates being of smaller and smaller magnitude due to
> the decreasing lrate.
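> 
> Schematically, the annealing rule looks like this (hypothetical variable
> names, not runica.m's literal code):
> 
>>> cosang = (dW(:)'*dWold(:)) / (norm(dW(:))*norm(dWold(:)));
>>> if acosd(cosang) > 60  % direction swung widely -> "erratic" -> anneal
>>>     lrate = 0.9*lrate; % illustrative annealing factor
>>> end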
> 
> Amica only reduces the lrate if the likelihood decreases. In theory, with a
> reasonable optimum, an optimization algorithm should be able to converge
> without reducing the learning rate. Convergence is measured by the weight
> change (the nd in the Amica output) independently of the lrate. That is, the
> weight change should theoretically decrease to zero with a constant
> (sufficiently small) lrate; the higher the lrate the better, since a higher
> lrate means faster convergence. A potential issue with the runica Infomax is
> early convergence if you start far from the optimum. Fortunately, the optimum
> is usually not far from the "sphering" starting point, so 500 iterations is
> usually enough to converge (even with a decreasing lrate).
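> 
> Schematically (hypothetical names, not the Amica source):
> 
>>> if LL(iter) < LL(iter-1)
>>>     lrate = 0.5*lrate;      % back off only when the likelihood drops
>>> end
>>> nd = norm(W - Wold, 'fro'); % convergence is judged by this, not by lrate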
> 
> So in Amica, convergence is judged by the "nd", not the lrate. The lrate
> should ideally be 0.5 or 1.0, the LL should be increasing, and the nd should
> be decreasing to zero.
> 
> Hope that is helpful.
> 
> Best,
> 
> Jason
> 
> From: Kevin Tan [mailto:kevintan at cmu.edu]
> Sent: Monday, August 17, 2015 2:31 PM
> To: jason at sccn.ucsd.edu; EEGLAB List
> Subject: AMICA lrate gets stuck
> 
> Hi Dr. Palmer & EEGLAB list,
> 
> I'm trying out AMICA for artifact rejection and DIPFIT. In my tests, the
> lrate consistently gets stuck at 0.5, stopping only due to the max iteration
> limit. This does not happen with extended Infomax.
> 
> This happens whether I use the cluster (128 threads) or a normal PC (4
> threads). I run AMICA 'locally', as it's called within a MATLAB script that
> is itself run via PBS; not sure if that makes a difference.
> 
> Here's the AMICA test stream:
> - PREP pipeline
> - Remove PREP-interpolated channels
> - Remove 1 additional channel for rank consistency
> - 1 Hz FIR high-pass
> - Epoch -500 to 1000 ms, no baseline correction
> - Epoch rejection
> - AMICA (using EEG.data(:,:) -- is it OK to concatenate epochs like this? see the sketch below)
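> 
> (A quick illustration of what that indexing does, with made-up sizes:)
> 
>>> size(EEG.data)      % e.g. [128 750 400]: chans x frames x epochs
>>> X = EEG.data(:,:);  % concatenates epochs along the time axis
>>> size(X)             % [128 300000]: chans x (frames*epochs)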
> 
> Here's the output log (using the cluster):
> 
> https://cmu.box.com/s/t7j3shmwjj1wj8to80au8mdm6b5676rh
> 
> Many thanks,
> Kevin
> 
> --
> Kevin Alastair M. Tan
> Lab Manager/Research Assistant
> Department of Psychology & Center for the Neural Basis of Cognition
> Carnegie Mellon University
> Baker Hall 434 | kevintan at cmu.edu | tarrlab.org/kevintan
> 
> 
> --
> Makoto Miyakoshi
> Swartz Center for Computational Neuroscience
> Institute for Neural Computation, University of California San Diego
> 
> _______________________________________________
> Eeglablist page: http://sccn.ucsd.edu/eeglab/eeglabmail.html
> To unsubscribe, send an empty email to eeglablist-unsubscribe at sccn.ucsd.edu
> For digest mode, send an email with the subject "set digest mime" to
> eeglablist-request at sccn.ucsd.edu


