[Eeglablist] AMICA lrate gets stuck

Makoto Miyakoshi mmiyakoshi at ucsd.edu
Fri Aug 21 16:36:56 PDT 2015


Dear Kevin and Jason,

In Figure 1 of the following paper, you can see an example of how the log
likelihood of the AMICA model changes across iterations.

Rissling AJ, Miyakoshi M, Sugar CA, Braff DL, Makeig S, Light GA (2014).
Cortical substrates and functional correlates of auditory deviance
processing deficits in schizophrenia. Neuroimage Clin. 6:424-437.

You can see that after about 700 iterations there is no 'jump' any more,
which may correspond to what Jason calls reaching the 'noise floor'. The
beta version (?) of AMICA we use here at SCCN has a convergence criterion
and usually stops at around 1000 iterations (the smallest I have seen was
around 700, the largest around 1500).
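
As an aside, here is a minimal sketch of how you could plot that
log-likelihood history yourself in MATLAB. It assumes the AMICA plugin's
loadmodout15() loader and that the returned struct stores the per-iteration
log likelihood in a field (called LL below); the output directory and field
names are illustrative, so please check the plugin documentation.

    % Load AMICA results and plot the log likelihood across iterations,
    % similar to Figure 1 of the paper above. Directory and field names
    % are illustrative.
    modres = loadmodout15('amicaout');
    figure;
    plot(modres.LL);
    xlabel('Iteration');
    ylabel('Log likelihood');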

Kevin, your questions are always very interesting and I learn a lot from
them. Thank you, Jason, for your answers and for sharing your knowledge.

Makoto

On Mon, Aug 17, 2015 at 4:35 PM, Jason Palmer <japalmer29 at gmail.com> wrote:

> Hi Kevin,
>
>
>
> The Infomax wchange is actually the weight change TIMES the lrate, and the
> lrate anneals down to 1e-7. So the actual final weight change for extended
> Infomax is 1e7 * wchange.
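>
> To put rough numbers on that (the values here are made up, not taken from
> Kevin's log):
>
>     % Illustrative values only: a printed wchange of 5e-12 at the annealed
>     % lrate of 1e-7 corresponds to an actual weight change of 5e-5.
>     lrate_final     = 1e-7;       % runica lrate after annealing
>     printed_wchange = 5e-12;      % wchange as printed by runica
>     actual_wchange  = printed_wchange / lrate_final   % = 5e-5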
>
>
>
> For Amica, if the nd weight change gets down to the 10^-5 magnitude, that
> is usually about the best you can expect with the large number of
> parameters being estimated and the finite computer precision. How small it
> can get depends on the number of samples you have compared to the number of
> channels. More channels = more parameters (nchan^2) = relatively little
> data = “noisier” convergence. More data = better determined optimum = less
> noisy convergence = smaller nd. For 64 channels with 100,000 samples, an nd
> of 10^-5 sounds appropriate.
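>
> As a back-of-the-envelope sketch of that samples-versus-parameters
> argument (plain arithmetic, not an Amica formula):
>
>     nchan    = 64;                            % channels
>     nsamples = 100000;                        % data points
>     nparams  = nchan^2;                       % unmixing weights to estimate (4096)
>     samples_per_param = nsamples / nparams    % about 24 samples per parameter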
>
>
>
> However, you can change the maximum number of iterations from the default
> 2000 using the ‘maxiter’ keyword. The LL should continue to increase and the
> nd should decrease (or at least not increase) beyond 2000 iterations, but not
> significantly.
> There should be a weight change “noise floor” reached, where the LL
> continues to increase by less and less, with possible reductions in lrate,
> and the nd hovers around the “noise floor”.
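>
> For example, something along these lines should raise the cap. This is a
> sketch only, assuming the runamica15 wrapper from the AMICA plugin; the
> output directory is hypothetical and the keyword spelling should be checked
> against the plugin's help text.
>
>     % Run AMICA on the concatenated epochs with a higher iteration cap than
>     % the default 2000; 'amicaout' is a placeholder output directory.
>     runamica15(EEG.data(:,:), 'outdir', 'amicaout', 'maxiter', 3000);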
>
>
>
> Best,
>
> Jason
>
>
>
> *From:* Kevin Tan [mailto:kevintan at cmu.edu]
> *Sent:* Monday, August 17, 2015 4:21 PM
> *To:* japalmer at ucsd.edu; EEGLAB List
> *Subject:* Re: AMICA lrate gets stuck
>
>
>
> Hi Jason,
>
>
>
> Thanks so much for the detailed response, really helps clarify what drives
> the lrate changes between the two implementations.
>
>
>
> However, for the same dataset processed the same way, AMICA yields a higher
> wchange at the last iteration (0.0000464763) than extended Infomax
> (0.000000).
>
>
>
> What are some reasons for this discrepancy, and what can I do to improve it?
> Or is the weight change not comparable between the two implementations? The
> entire AMICA log is linked in the original post if that helps.
>
>
>
> Thanks again,
>
> Kevin
>
>
> --
>
> Kevin Alastair M. Tan
>
> Lab Manager/Research Assistant
>
> Department of Psychology & Center for the Neural Basis of Cognition
>
> Carnegie Mellon University
>
>
>
> Baker Hall 434
> <https://www.google.com/maps/place/40%C2%B026%2729.5%22N+79%C2%B056%2744.0%22W/@40.4414869,-79.9455701,61m/data=!3m1!1e3!4m2!3m1!1s0x0:0x0>
>  | kevintan at cmu.edu | tarrlab.org/kevintan
> <http://tarrlabwiki.cnbc.cmu.edu/index.php/KevinTan>
>
>
>
> On Mon, Aug 17, 2015 at 7:06 PM, Jason Palmer <japalmer29 at gmail.com>
> wrote:
>
> Hi Kevin,
>
>
>
> The Amica lrate is not supposed to decrease. The algorithm is a more
> typical gradient descent / Newton optimization algorithm, as opposed to the
> Infomax implementation in runica.m, which uses a type of simulated
> annealing, deciding whether to reduce the learning rate based on the angle
> between recent update directions. The idea there is that this angle will be
> small when the algorithm is near an optimum, as though it is “heading right
> for it”, so the lrate gets reduced if the algorithm is moving “erratically”
> with a large angle between consecutive directions, and doesn’t get reduced
> if the estimate is “moving smoothly”. In practice, this annealing method in
> fact usually ends up reducing the learning rate continuously until it reaches
> the preset minimum, which usually happens at around 500 iterations (500
> reductions). That is, the angle is never actually small, so the stopping
> condition is essentially a maximum number of iterations, with the updates
> becoming smaller and smaller in magnitude as the lrate decreases.
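>
> Schematically, the annealing rule looks roughly like the following. This is
> a paraphrase of runica.m's 'annealdeg'/'annealstep' logic with illustrative
> values and stand-in vectors so it runs on its own, not the verbatim code.
>
>     % Stand-in current and previous weight-update directions.
>     delta  = randn(1, 64^2);   olddelta  = randn(1, 64^2);
>     change = delta*delta';     oldchange = olddelta*olddelta';
>     lrate = 1e-3;  annealdeg = 60;  annealstep = 0.98;   % illustrative values
>     degconst = 180/pi;
>     % Angle between the current and previous update directions.
>     angledelta = acos((delta*olddelta') / sqrt(change*oldchange));
>     if degconst*angledelta > annealdeg   % consecutive updates disagree ("erratic")
>         lrate = lrate * annealstep;      % anneal the lrate down
>     end                                  % otherwise keep the lrate ("smooth")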
>
>
>
> Amica only reduces the lrate if the likelihood decreases. In theory, with
> a reasonable optimum, an optimization algorithm should be able to converge
> without reducing the learning rate. The convergence is measured by the
> weight change (the nd in the amica output) independently of the lrate. That
> is, the weight change should theoretically decrease to zero with a constant
> (sufficiently small) lrate; the higher the lrate, the faster the convergence.
> A potential issue with the runica Infomax is early
> convergence if you are starting far from the optimum. Fortunately the
> optimum is usually not far from the “sphering” starting point, so 500
> iterations is usually enough to converge (even with decreasing lrate).
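>
> As a conceptual sketch of that difference (not Amica's actual code, and the
> nd definition below is only illustrative): the step size backs off only when
> the likelihood drops, and convergence is read from the size of the weight
> update, the nd, rather than from the lrate.
>
>     % Stand-in values so the snippet runs on its own.
>     lrate = 0.5;  LL_old = -1.234e6;  LL_new = -1.233e6;
>     dW = 1e-5 * randn(64);               % hypothetical weight update
>     nd = norm(dW, 'fro') / numel(dW);    % magnitude of the weight change
>     if LL_new < LL_old                   % likelihood decreased
>         lrate = lrate / 2;               % only then reduce the lrate
>     end                                  % otherwise it stays at 0.5 or 1.0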
>
>
>
> So in Amica, the convergence is judged by the “nd”, not the lrate. The
> lrate should ideally be 0.5 or 1.0, the LL should be increasing, and the
> nd should be decreasing toward zero.
>
>
>
> Hope that is helpful.
>
>
>
> Best,
>
> Jason
>
>
>
>
>
> *From:* Kevin Tan [mailto:kevintan at cmu.edu]
> *Sent:* Monday, August 17, 2015 2:31 PM
> *To:* jason at sccn.ucsd.edu; EEGLAB List
> *Subject:* AMICA lrate gets stuck
>
>
>
> Hi Dr. Palmer & EEGLAB list,
>
>
>
> I'm trying out AMICA for artifact rejection and DIPFIT. In my tests, the
> lrate consistently gets stuck at 0.5, stopping only due to the max iteration
> limit. This does not happen with extended Infomax.
>
>
>
> This happens whether I use the cluster (128 threads) or a normal PC (4
> threads). I run AMICA 'locally', as it is called within a MATLAB script that
> is itself run via PBS; I'm not sure if that makes a difference.
>
>
>
> Here's the AMICA test stream:
>
> - PREP pipeline
>
> - Remove PREP-interpolated channels
>
> - Remove 1 additional channel for rank consistency
>
> - 1 Hz FIR high-pass
>
> - Epoch -500 to 1000 ms, no baseline correction
>
> - Epoch rejection
>
> - AMICA (using EEG(:,:) -- is it ok to concatenate epochs like this? see
> the note just below)
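>
> (Here EEG(:,:) is shorthand for EEG.data(:,:): after epoching, EEG.data is
> channels x points x trials, and the (:,:) indexing just concatenates the
> epochs along time, e.g.:)
>
>     dat = EEG.data(:,:);    % channels x (points*trials)
>     assert(isequal(dat, reshape(EEG.data, size(EEG.data,1), [])));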
>
>
>
> Here's the output log (using the cluster):
>
> https://cmu.box.com/s/t7j3shmwjj1wj8to80au8mdm6b5676rh
>
>
>
> Many thanks,
>
> Kevin
>
> --
>
> Kevin Alastair M. Tan
>
> Lab Manager/Research Assistant
>
> Department of Psychology & Center for the Neural Basis of Cognition
>
> Carnegie Mellon University
>
>
>
> Baker Hall 434
> <https://www.google.com/maps/place/40%C2%B026%2729.5%22N+79%C2%B056%2744.0%22W/@40.4414869,-79.9455701,61m/data=!3m1!1e3!4m2!3m1!1s0x0:0x0>
>  | kevintan at cmu.edu | tarrlab.org/kevintan
> <http://tarrlabwiki.cnbc.cmu.edu/index.php/KevinTan>
>
>
>
> _______________________________________________
> Eeglablist page: http://sccn.ucsd.edu/eeglab/eeglabmail.html
> To unsubscribe, send an empty email to
> eeglablist-unsubscribe at sccn.ucsd.edu
> For digest mode, send an email with the subject "set digest mime" to
> eeglablist-request at sccn.ucsd.edu
>



-- 
Makoto Miyakoshi
Swartz Center for Computational Neuroscience
Institute for Neural Computation, University of California San Diego