<div dir="ltr">Hi Jason, <div><br></div><div>Thanks again for sending me this fix. </div><div><br></div><div>For my short datasets, I'm using your fix for significant dimension reduction (due to lack of datapoints). The resulting ICs aren't quite ordered in descending amplitude: some bottom ICs seem almost as strong as first ICs. Also, the "cognitive" ICs are spread all over, whereas without dim reduction they're mostly at the top. Is this normal?</div><div><br></div><div>My implementation:</div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><font face="monospace, monospace">nmIC = floor(sqrt(size(EEG.data(:, :), 2) / 30));<br></font><font face="monospace, monospace">[U, D] = eig(cov(EEG.data(:, :)'));<br></font><font face="monospace, monospace">U = fliplr(U);<br></font><font face="monospace, monospace">D = rot90(D, 2);<br></font><font face="monospace, monospace">dd = diag(D);<br></font><font face="monospace, monospace">Sph = diag(sqrt(1./dd(1:nmIC))) * U(:, 1:nmIC)'; <br></font><font face="monospace, monospace">newdat = Sph * EEG.data(:, :);<br></font><span style="color:rgb(0,0,0)"><font face="monospace, monospace">[W,S,mods] = runamica12(newdat, ....);</font></span></blockquote><div><br></div><div>Many thanks for your continued help!</div><div>–Kevin </div><div class="gmail_extra"><div><div><div dir="ltr"><div><font size="1" face="arial, helvetica, sans-serif">--</font></div><font size="1" face="arial, helvetica, sans-serif">Kevin Alastair M. 
Tan</font><div><font size="1" face="arial, helvetica, sans-serif">Lab Manager/Research Assistant<br></font><div><font size="1" face="arial, helvetica, sans-serif">Department of Psychology & Center for the Neural Basis of Cognition</font></div><div><font size="1" face="arial, helvetica, sans-serif">Carnegie Mellon University</font></div><div><font size="1" face="arial, helvetica, sans-serif"><br></font><div><div><font size="1" face="arial, helvetica, sans-serif"><a href="https://www.google.com/maps/place/40%C2%B026%2729.5%22N+79%C2%B056%2744.0%22W/@40.4414869,-79.9455701,61m/data=!3m1!1e3!4m2!3m1!1s0x0:0x0" target="_blank">Baker Hall 434</a> | <a href="mailto:kevintan@cmu.edu" target="_blank">kevintan@cmu.edu</a> | <a href="http://tarrlabwiki.cnbc.cmu.edu/index.php/KevinTan" target="_blank">tarrlab.org/kevintan</a></font></div></div></div></div></div></div></div>
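For readers following this recipe outside MATLAB, here is a rough NumPy sketch of the same PCA-reduction-plus-sphering steps (the toy rank-deficient data is illustrative; only the 1e-9 eigenvalue threshold and the overall recipe are taken from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for EEG.data(:,:): 8 "channels" x 1000 samples, made
# rank-deficient on purpose (channel 8 = sum of the others) to mimic the
# rank loss caused by re-referencing.
X = rng.standard_normal((7, 1000))
X = np.vstack([X, X.sum(axis=0, keepdims=True)])

C = np.cov(X)                      # channel covariance (8 x 8)
dd, U = np.linalg.eigh(C)          # eigh returns ascending eigenvalues
U, dd = U[:, ::-1], dd[::-1]       # flip to descending (the fliplr/rot90 step)
numeig = int(np.sum(dd > 1e-9))    # keep only clearly positive eigenvalues
Sph = np.diag(1.0 / np.sqrt(dd[:numeig])) @ U[:, :numeig].T
newdat = Sph @ X                   # reduced to numeig dimensions and sphered
# np.cov(newdat) is now the identity (up to numerical error), so the
# near-zero/negative eigenvalue never reaches a sqrt downstream.
```

As in the workaround quoted below, `newdat` is what would be passed to runamica12, with the result mapped back to channel space via EEG.icasphere = S*Sph and EEG.icaweights = W.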
<br><div class="gmail_quote">On Tue, Aug 25, 2015 at 12:28 AM, Jason Palmer <span dir="ltr"><<a href="mailto:japalmer29@gmail.com" target="_blank">japalmer29@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div lang="EN-US" link="blue" vlink="purple"><div><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">Sorry this line should be:<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> >> [W,S,mods] = runamica12(newdat, ....);<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"><u></u> <u></u></span></p><div><div style="border-style:solid none none;border-top-color:rgb(181,196,223);border-top-width:1pt;padding:3pt 0in 0in"><p class="MsoNormal"><b><span style="font-size:10pt;font-family:Tahoma,sans-serif">From:</span></b><span style="font-size:10pt;font-family:Tahoma,sans-serif"> Jason Palmer [mailto:<a href="mailto:japalmer@ucsd.edu" target="_blank">japalmer@ucsd.edu</a>] <br><b>Sent:</b> Monday, August 24, 2015 9:17 PM<br><b>To:</b> 'Kevin Tan'; '<a href="mailto:mmiyakoshi@ucsd.edu" target="_blank">mmiyakoshi@ucsd.edu</a>'; 'EEGLAB List'<br><b>Subject:</b> RE: [Eeglablist] AMICA lrate gets stuck<u></u><u></u></span></p></div></div><span><p class="MsoNormal"><u></u> <u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">Hi Kevin,<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"><u></u> <u></u></span></p><p 
class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">Sorry, there is a bug in the code that tries to take the sqrt of the negative eigenvalue (even though that dimension is being removed), making the LL=NaN and aborting. The eigenvalue is actually essentially zero, resulting from rank deficiency likely due to re-referencing, so more cleaning won’t necessarily change the arbitrarily small value to positive, unless you increase the dimensionality of the data. I’m currently fixing this bug along with some other sphering and PCA issues and will release debugged versions soon.<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">For now you can use the workaround of sphering before Amica, e.g.<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> >> [U,D] = eig(cov(EEG.data(:,:)'));<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> >> U = fliplr(U); D = fliplr(flipud(D)); % make descending order<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> </span><span lang="DE" style="font-size:11pt;font-family:Calibri,sans-serif;color:black">>> dd = diag(D); numeig = sum(dd > 1e-9);<u></u><u></u></span></p><p class="MsoNormal"><span lang="DE" style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> >> Sph = diag(sqrt(1./dd(1:numeig))) * U(:,1:numeig)';<u></u><u></u></span></p><p class="MsoNormal"><span lang="DE" style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> </span><span lang="IT" 
style="font-size:11pt;font-family:Calibri,sans-serif;color:black">>> newdat = Sph * EEG.data(:,:); % reduce to numeig dimensions<u></u><u></u></span></p><p class="MsoNormal"><span lang="IT" style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> </span><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">>> [W,S,mods] = runamica12(EEG.data(:,:), ....);<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> >> EEG.icasphere = S*Sph;<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> >> EEG.icaweights = W;<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">This should run Amica on full-rank data and avoid the negative near-zero eigenvalue problem until it is fixed in the Amica code.<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"><u></u> <u></u></span></p></span><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">Best,<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">Jason<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"><u></u> <u></u></span></p><p class="MsoNormal"><b><span style="font-size:10pt;font-family:Tahoma,sans-serif">From:</span></b><span style="font-size:10pt;font-family:Tahoma,sans-serif"> Kevin Tan [<a href="mailto:kevintan@cmu.edu" 
target="_blank">mailto:kevintan@cmu.edu</a>] <br><span><b>Sent:</b> Monday, August 24, 2015 6:39 PM<br><b>To:</b> Jason Palmer; <a href="mailto:mmiyakoshi@ucsd.edu" target="_blank">mmiyakoshi@ucsd.edu</a>; EEGLAB List<br></span><b>Subject:</b> Re: [Eeglablist] AMICA lrate gets stuck<u></u><u></u></span></p><div><div><p class="MsoNormal"><u></u> <u></u></p><div><p class="MsoNormal">Hi Jason, <u></u><u></u></p><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">I'm running into a negative min eigenvalue issue for ~25% of my subjects. This results in the binary not exporting anything to the amica output dir, stopping the main loop prematurely.<u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Before running AMICA, the data is fairly aggressively cleaned: <u></u><u></u></p></div><div><p class="MsoNormal">1) PREP pipeline<u></u><u></u></p></div><div><p class="MsoNormal">2) remove mastoids & PREP-interpolated chans for rank reduction<u></u><u></u></p></div><div><p class="MsoNormal">3) 1hz hi-pass <u></u><u></u></p></div><div><p class="MsoNormal">4) epoch no baseline correction <u></u><u></u></p></div><div><p class="MsoNormal">5) epoch rejection (ch means deviation, variance, max amplitude dif > 2.5 SDs) <u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Not sure what else I can do to clean the data to make the eigenvalues positive.<u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">I'm using Biosemi 128ch which is known for dynamic range issues, but I run everything in double. 
Not sure if demeaning each channel would help since it's already hi-passed.<u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Also, not sure if it matters, but AMICA seems to do dimension reduction despite me removing channels to make up for 'robust' reference rank reduction.<u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">For the subjects that do run on AMICA, the ICs seem a lot cleaner than Infomax, which makes me want to stick to AMICA. <u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Bad subject log example:<u></u><u></u></p></div><div><blockquote style="border-style:none none none solid;border-left-color:rgb(204,204,204);border-left-width:1pt;padding:0in 0in 0in 6pt;margin:5pt 0in 5pt 4.8pt"><p class="MsoNormal"><span style="font-family:'Courier New'"> 1 : data = -2.3969683647155762 -2.910743236541748<br> getting the mean ...<br> mean = -7.73083349593588626E-2 -8.98852135101791128E-2 -0.17064473790401868<br> subtracting the mean ...<br> getting the sphering matrix ...<br> cnt = 706560<br> doing eig nx = 128 lwork = 163840<br> minimum eigenvalues = -4.02618476752492072E-14 0.59534647773064309 0.66105027982216646<br> maximum eigenvalues = 3718.0696499000956 2980.9762500746847 1012.6027880321443<br> num eigs kept = 127<br> numeigs = 127</span><u></u><u></u></p></blockquote></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Good subject log example:<u></u><u></u></p></div><div><blockquote style="border-style:none none none solid;border-left-color:rgb(204,204,204);border-left-width:1pt;padding:0in 0in 0in 6pt;margin:5pt 0in 5pt 4.8pt"><p class="MsoNormal"><span style="font-family:'Courier New'">1 : data = 3.1855385303497314 5.7855358123779297<br> getting the mean ...<br> mean = -0.38155908557715745 -0.27761248863920301 -0.3608881566308772<br> subtracting the mean ...<br> getting the 
sphering matrix ...<br> cnt = 703488<br> doing eig nx = 130 lwork = 169000<br> minimum eigenvalues = 1.35676859295476523E-13 0.80288149429025613 1.1256218532296671<br> maximum eigenvalues = 9749.2425686202987 1277.5793884179475 700.98046655297128<br> num eigs kept = 129<br> numeigs = 129</span><u></u><u></u></p></blockquote><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Many many thanks for the continued help!<u></u><u></u></p></div><div><p class="MsoNormal">–Kevin <u></u><u></u></p></div></div></div><div><p class="MsoNormal"><br clear="all"><u></u><u></u></p><div><div><div><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:Arial,sans-serif">--</span><u></u><u></u></p></div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:Arial,sans-serif">Kevin Alastair M. Tan</span><u></u><u></u></p><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:Arial,sans-serif">Lab Manager/Research Assistant</span><u></u><u></u></p><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:Arial,sans-serif">Department of Psychology & Center for the Neural Basis of Cognition</span><u></u><u></u></p></div><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:Arial,sans-serif">Carnegie Mellon University</span><u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p><div><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:Arial,sans-serif"><a href="https://www.google.com/maps/place/40%C2%B026%2729.5%22N+79%C2%B056%2744.0%22W/@40.4414869,-79.9455701,61m/data=!3m1!1e3!4m2!3m1!1s0x0:0x0" target="_blank">Baker Hall 434</a> | <a href="mailto:kevintan@cmu.edu" target="_blank">kevintan@cmu.edu</a> | <a href="http://tarrlabwiki.cnbc.cmu.edu/index.php/KevinTan" target="_blank">tarrlab.org/kevintan</a></span><u></u><u></u></p></div></div></div></div></div></div></div><p class="MsoNormal"><u></u> <u></u></p><div><p class="MsoNormal">On Fri, Aug 21, 2015 at 7:36 PM, Makoto Miyakoshi 
<<a href="mailto:mmiyakoshi@ucsd.edu" target="_blank">mmiyakoshi@ucsd.edu</a>> wrote:<u></u><u></u></p><div><p class="MsoNormal">Dear Kevin and Jason,<br><br>In Figure 1 of the following paper, you can see an example of how the log likelihood of the AMICA model changes across iterations. <u></u><u></u></p><div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Rissling AJ, Miyakoshi M, Sugar CA, Braff DL, Makeig S, Light GA. (2014). Cortical substrates and functional correlates of auditory deviance processing deficits in schizophrenia. Neuroimage Clin. 6:424-437.<u></u><u></u></p></div></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">You can see that after 700 iterations there are no more 'jumps', which may correspond to what Jason describes as reaching the 'noise floor'. The beta version (?) of AMICA we use here at SCCN has a convergence criterion and usually stops at around 1000 iterations (the smallest I saw was around 700, the maximum 1500).<u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Kevin, your questions are always very interesting and I learn a lot from them. 
Thank you Jason for your answers and sharing knowledge.<u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Makoto<u></u><u></u></p></div></div><div><p class="MsoNormal"><u></u> <u></u></p><div><div><div><p class="MsoNormal">On Mon, Aug 17, 2015 at 4:35 PM, Jason Palmer <<a href="mailto:japalmer29@gmail.com" target="_blank">japalmer29@gmail.com</a>> wrote:<u></u><u></u></p></div></div><blockquote style="border-style:none none none solid;border-left-color:rgb(204,204,204);border-left-width:1pt;padding:0in 0in 0in 6pt;margin:5pt 0in 5pt 4.8pt"><div><div><div><div><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">Hi Kevin,</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">The Infomax wchange is actually the weight change TIMES the lrate, which is going to 1e-7. So the actual final wchange for extended infomax is 1e7 * wchange.</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">For Amica, if the nd weight change gets down to the 10^-5 magnitude, that is usually about the best you can expect with the large number of parameters being estimated and the finite computer precision. How small it can get depends on the number of samples you have compared to the number of channels. More channels = more parameters (nchan^2) = relatively little data = “noisier” convergence. More data = better determined optimum = less noisy convergence = smaller nd. 
For 64 channels with 100,000 samples, an nd of 10^-5 sounds appropriate.</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">However, you can change the maximum number of iterations from the default of 2000 using the ‘maxiter’ keyword. The LL should continue to increase, and the nd decrease (or at least not increase), beyond 2000 iterations, but not significantly. A weight-change “noise floor” should be reached, where the LL continues to increase by less and less, with possible reductions in lrate, and the nd hovers around the “noise floor”.</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">Best,</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">Jason</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> </span><u></u><u></u></p><p class="MsoNormal"><b><span style="font-size:10pt;font-family:Tahoma,sans-serif">From:</span></b><span style="font-size:10pt;font-family:Tahoma,sans-serif"> Kevin Tan [mailto:<a href="mailto:kevintan@cmu.edu" target="_blank">kevintan@cmu.edu</a>] <br><b>Sent:</b> Monday, August 17, 2015 4:21 PM<br><b>To:</b> <a href="mailto:japalmer@ucsd.edu" target="_blank">japalmer@ucsd.edu</a>; EEGLAB List<br><b>Subject:</b> Re: AMICA lrate gets stuck</span><u></u><u></u></p><div><div><p class="MsoNormal"> <u></u><u></u></p><div><p class="MsoNormal">Hi Jason, <u></u><u></u></p><div><p class="MsoNormal"> <u></u><u></u></p></div><div><p class="MsoNormal">Thanks so much for the detailed response; it really helps clarify what drives the lrate changes between the two implementations. 
<u></u><u></u></p></div><div><p class="MsoNormal"> <u></u><u></u></p></div><div><p class="MsoNormal">However, for the same dataset processed the same way, AMICA yields higher wchange at last iteration (0.0000464763) versus extended Infomax (0.000000). <u></u><u></u></p></div><div><p class="MsoNormal"> <u></u><u></u></p></div><div><p class="MsoNormal">What are some reasons for this discrepancy, and what can I do improve it? Or is weight change between the two implementations not comparable? The entire AMICA log is linked in original post if that helps. <u></u><u></u></p></div><div><p class="MsoNormal"> <u></u><u></u></p></div><div><p class="MsoNormal">Thanks again, <u></u><u></u></p></div><div><p class="MsoNormal">Kevin<u></u><u></u></p></div></div><div><p class="MsoNormal"><br clear="all"><u></u><u></u></p><div><div><div><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:Arial,sans-serif">--</span><u></u><u></u></p></div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:Arial,sans-serif">Kevin Alastair M. 
Tan</span><u></u><u></u></p><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:Arial,sans-serif">Lab Manager/Research Assistant</span><u></u><u></u></p><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:Arial,sans-serif">Department of Psychology & Center for the Neural Basis of Cognition</span><u></u><u></u></p></div><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:Arial,sans-serif">Carnegie Mellon University</span><u></u><u></u></p></div><div><p class="MsoNormal"> <u></u><u></u></p><div><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:Arial,sans-serif"><a href="https://www.google.com/maps/place/40%C2%B026%2729.5%22N+79%C2%B056%2744.0%22W/@40.4414869,-79.9455701,61m/data=!3m1!1e3!4m2!3m1!1s0x0:0x0" target="_blank">Baker Hall 434</a> | <a href="mailto:kevintan@cmu.edu" target="_blank">kevintan@cmu.edu</a> | <a href="http://tarrlabwiki.cnbc.cmu.edu/index.php/KevinTan" target="_blank">tarrlab.org/kevintan</a></span><u></u><u></u></p></div></div></div></div></div></div></div><p class="MsoNormal"> <u></u><u></u></p><div><p class="MsoNormal">On Mon, Aug 17, 2015 at 7:06 PM, Jason Palmer <<a href="mailto:japalmer29@gmail.com" target="_blank">japalmer29@gmail.com</a>> wrote:<u></u><u></u></p><div><div><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">Hi Kevin,</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">The Amica lrate is not supposed to decrease. The algorithm is a more typical gradient descent / Newton optimization algorithm, as opposed to the Infomax implementation in runica.m, which uses a type of simulated annealing, deciding whether to reduce the learning rate based on the angle between recent update directions. 
The idea there is that this angle will be small when the algorithm is near an optimum, as though it is “heading right for it”, so the lrate gets reduced if the algorithm is moving “erratically” with a large angle between consecutive directions, and doesn’t get reduced if the estimate is “moving smoothly”. In practice, this annealing method usually ends up in fact reducing the learning rate continuously until it reaches the preset minimum, which usually happens at around 500 iterations (500 reductions). I.e. the angle is never actually small, so the stopping condition is essentially a maximum number of iterations, with the updates being of smaller and smaller magnitude due to the lrate decreasing.</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">Amica only reduces the lrate if the likelihood decreases. In theory, with a reasonable optimum, an optimization algorithm should be able to converge without reducing the learning rate. The convergence is measured by the weight change (the nd in the amica output) independently of the lrate. That is, the weight change should theoretically decrease to zero with a constant (sufficiently small) lrate—the higher the better since higher lrate means faster convergence. A potential issue with the runica Infomax is early convergence if you are starting far from the optimum. Fortunately the optimum is usually not far from the “sphering” starting point, so 500 iterations is usually enough to converge (even with decreasing lrate).</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">So in Amica, the convergence is judged by the “nd”, not the lrate. 
The lrate should ideally be 0.5 or 1.0, the LL should be increasing, and the nd should be decreasing to zero.</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">Hope that is helpful.</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">Best,</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black">Jason</span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> </span><u></u><u></u></p><p class="MsoNormal"><span style="font-size:11pt;font-family:Calibri,sans-serif;color:black"> </span><u></u><u></u></p><p class="MsoNormal"><b><span style="font-size:10pt;font-family:Tahoma,sans-serif">From:</span></b><span style="font-size:10pt;font-family:Tahoma,sans-serif"> Kevin Tan [mailto:<a href="mailto:kevintan@cmu.edu" target="_blank">kevintan@cmu.edu</a>] <br><b>Sent:</b> Monday, August 17, 2015 2:31 PM<br><b>To:</b> <a href="mailto:jason@sccn.ucsd.edu" target="_blank">jason@sccn.ucsd.edu</a>; EEGLAB List<br><b>Subject:</b> AMICA lrate gets stuck</span><u></u><u></u></p><div><div><p class="MsoNormal"> <u></u><u></u></p><div><div><p class="MsoNormal">Hi Dr. Palmer & EEGLAB list, <u></u><u></u></p></div><div><p class="MsoNormal"> <u></u><u></u></p></div><div><p class="MsoNormal">I'm trying out AMICA for artifact rejection and DIPFIT. In my tests, the lrate consistently gets stuck at 0.5, stopping only due to the max iteration limit. This does not happen with extended Infomax. 
<u></u><u></u></p></div><div><p class="MsoNormal"> <u></u><u></u></p></div><div><p class="MsoNormal">This happens whether I use the cluster (128 threads) or a normal PC (4 threads). I run AMICA 'locally' as it's called within a matlab script already run via PBS, not sure if that makes a difference. <u></u><u></u></p></div><div><p class="MsoNormal"> <u></u><u></u></p></div><div><p class="MsoNormal">Here's the AMICA test stream:<u></u><u></u></p></div><div><p class="MsoNormal">- PREP pipeline<u></u><u></u></p></div><div><p class="MsoNormal">- Remove PREP-interpolated channels<u></u><u></u></p></div><div><p class="MsoNormal">- Remove 1 additional channel for rank consistency<u></u><u></u></p></div><div><p class="MsoNormal">- 1hz FIR hi-pass<u></u><u></u></p></div><div><p class="MsoNormal">- Epoch -500 to 1000ms no baseline correction<u></u><u></u></p></div><div><p class="MsoNormal">- Epoch rejection<u></u><u></u></p></div><div><p class="MsoNormal">- AMICA (using EEG(:,:) -- is it ok to concatenate epochs like this?)<u></u><u></u></p></div><div><p class="MsoNormal"> <u></u><u></u></p></div><div><p class="MsoNormal">Here's the output log (using the cluster):<u></u><u></u></p></div><p class="MsoNormal"><a href="https://cmu.box.com/s/t7j3shmwjj1wj8to80au8mdm6b5676rh" target="_blank">https://cmu.box.com/s/t7j3shmwjj1wj8to80au8mdm6b5676rh</a><u></u><u></u></p><div><p class="MsoNormal"> <u></u><u></u></p></div><div><p class="MsoNormal">Many thanks, <u></u><u></u></p></div><div><p class="MsoNormal">Kevin<u></u><u></u></p></div><div><div><div><div><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:Arial,sans-serif">--</span><u></u><u></u></p></div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:Arial,sans-serif">Kevin Alastair M. 
Tan</span><u></u><u></u></p><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:Arial,sans-serif">Lab Manager/Research Assistant</span><u></u><u></u></p><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:Arial,sans-serif">Department of Psychology & Center for the Neural Basis of Cognition</span><u></u><u></u></p></div><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:Arial,sans-serif">Carnegie Mellon University</span><u></u><u></u></p></div><div><p class="MsoNormal"> <u></u><u></u></p><div><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:Arial,sans-serif"><a href="https://www.google.com/maps/place/40%C2%B026%2729.5%22N+79%C2%B056%2744.0%22W/@40.4414869,-79.9455701,61m/data=!3m1!1e3!4m2!3m1!1s0x0:0x0" target="_blank">Baker Hall 434</a> | <a href="mailto:kevintan@cmu.edu" target="_blank">kevintan@cmu.edu</a> | <a href="http://tarrlabwiki.cnbc.cmu.edu/index.php/KevinTan" target="_blank">tarrlab.org/kevintan</a></span><u></u><u></u></p></div></div></div></div></div></div></div></div></div></div></div></div></div></div><p class="MsoNormal"> <u></u><u></u></p></div></div></div></div></div><p class="MsoNormal"><u></u> <u></u></p></div></div><p class="MsoNormal">_______________________________________________<br>Eeglablist page: <a href="http://sccn.ucsd.edu/eeglab/eeglabmail.html" target="_blank">http://sccn.ucsd.edu/eeglab/eeglabmail.html</a><br>To unsubscribe, send an empty email to <a href="mailto:eeglablist-unsubscribe@sccn.ucsd.edu" target="_blank">eeglablist-unsubscribe@sccn.ucsd.edu</a><br>For digest mode, send an email with the subject "set digest mime" to <a href="mailto:eeglablist-request@sccn.ucsd.edu" target="_blank">eeglablist-request@sccn.ucsd.edu</a><u></u><u></u></p></blockquote></div><p class="MsoNormal"><span style="color:rgb(136,136,136)"><br><br clear="all"><span><u></u><u></u></span></span></p><div><p class="MsoNormal"><u></u> <u></u></p></div><p class="MsoNormal"><span><span 
style="color:rgb(136,136,136)">-- </span><u></u><u></u></span></p><div><div><p class="MsoNormal"><span style="color:rgb(136,136,136)">Makoto Miyakoshi<br>Swartz Center for Computational Neuroscience<br>Institute for Neural Computation, University of California San Diego</span><u></u><u></u></p></div></div></div></div><p class="MsoNormal"><u></u> <u></u></p></div></div></div></div></div></blockquote></div><br></div></div>