<div dir="ltr">Hi Jason, <div><br></div><div>Thanks so much for the detailed response; it really helps clarify what drives the lrate changes between the two implementations. </div><div><br></div><div>However, for the same dataset processed the same way, AMICA yields a higher wchange at the last iteration (0.0000464763) versus extended Infomax (0.000000). </div><div><br></div><div>What are some reasons for this discrepancy, and what can I do to improve it? Or is weight change between the two implementations not comparable? The entire AMICA log is linked in the original post if that helps. </div><div><br></div><div>Thanks again, </div><div>Kevin</div></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature"><div dir="ltr"><div><font size="1" face="arial, helvetica, sans-serif">--</font></div><font size="1" face="arial, helvetica, sans-serif">Kevin Alastair M. Tan</font><div><font size="1" face="arial, helvetica, sans-serif">Lab Manager/Research Assistant<br></font><div><font size="1" face="arial, helvetica, sans-serif">Department of Psychology & Center for the Neural Basis of Cognition</font></div><div><font size="1" face="arial, helvetica, sans-serif">Carnegie Mellon University</font></div><div><font size="1" face="arial, helvetica, sans-serif"><br></font><div><div><font size="1" face="arial, helvetica, sans-serif"><a href="https://www.google.com/maps/place/40%C2%B026%2729.5%22N+79%C2%B056%2744.0%22W/@40.4414869,-79.9455701,61m/data=!3m1!1e3!4m2!3m1!1s0x0:0x0" target="_blank">Baker Hall 434</a> | <a href="mailto:kevintan@cmu.edu" target="_blank">kevintan@cmu.edu</a> | <a href="http://tarrlabwiki.cnbc.cmu.edu/index.php/KevinTan" target="_blank">tarrlab.org/kevintan</a></font></div></div></div></div></div></div></div>
<br><div class="gmail_quote">On Mon, Aug 17, 2015 at 7:06 PM, Jason Palmer <span dir="ltr"><<a href="mailto:japalmer29@gmail.com" target="_blank">japalmer29@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div lang="EN-US" link="blue" vlink="purple"><div><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:black">Hi Kevin,<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:black"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:black">The Amica lrate is not supposed to decrease. The algorithm is a more typical gradient descent / Newton optimization algorithm, as opposed to the Infomax implementation in runica.m, which uses a type of simulated annealing, deciding whether to reduce the learning rate based on the angle between recent update directions. The idea there is that this angle will be small when the algorithm is near an optimum, as though it is “heading right for it”, so the lrate gets reduced if the algorithm is moving “erratically” with a large angle between consecutive directions, and doesn’t get reduced if the estimate is “moving smoothly”. In practice, this annealing method usually ends up in fact reducing the learning rate continuously until it reaches the preset minimum, which usually happens at around 500 iterations (500 reductions). I.e. 
the angle is never actually small, so the stopping condition is essentially a maximum number of iterations, with the updates being of smaller and smaller magnitude due to the lrate decreasing.<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:black"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:black">Amica only reduces the lrate if the likelihood decreases. In theory, with a reasonable optimum, an optimization algorithm should be able to converge without reducing the learning rate. The convergence is measured by the weight change (the nd in the amica output) independently of the lrate. That is, the weight change should theoretically decrease to zero with a constant (sufficiently small) lrate—the higher the better since higher lrate means faster convergence. A potential issue with the runica Infomax is early convergence if you are starting far from the optimum. Fortunately the optimum is usually not far from the “sphering” starting point, so 500 iterations is usually enough to converge (even with decreasing lrate).<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:black"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:black">So in Amica, the convergence is judged by the “nd”, not the lrate. 
The lrate should ideally be 0.5 or 1.0, the LL should be increasing, and the nd should be decreasing to zero.<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:black"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:black">Hope that is helpful.<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:black"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:black">Best,<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:black">Jason<u></u><u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:black"><u></u> <u></u></span></p><p class="MsoNormal"><span style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:black"><u></u> <u></u></span></p><p class="MsoNormal"><b><span style="font-size:10.0pt;font-family:"Tahoma","sans-serif"">From:</span></b><span style="font-size:10.0pt;font-family:"Tahoma","sans-serif""> Kevin Tan [mailto:<a href="mailto:kevintan@cmu.edu" target="_blank">kevintan@cmu.edu</a>] <br><b>Sent:</b> Monday, August 17, 2015 2:31 PM<br><b>To:</b> <a href="mailto:jason@sccn.ucsd.edu" target="_blank">jason@sccn.ucsd.edu</a>; EEGLAB List<br><b>Subject:</b> AMICA lrate gets stuck<u></u><u></u></span></p><div><div class="h5"><p class="MsoNormal"><u></u> <u></u></p><div><div><p class="MsoNormal">Hi Dr. Palmer & EEGLAB list, <u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">I'm trying out AMICA for artifact rejection and DIPFIT. In my tests, the lrate consistently gets stuck at 0.5, stopping only due to max iteration limit. This does not happen with extended Infomax. 
<u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">This happens whether I use the cluster (128 threads) or a normal PC (4 threads). I run AMICA 'locally' as it's called within a matlab script already run via PBS, not sure if that makes a difference. <u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Here's the AMICA test stream:<u></u><u></u></p></div><div><p class="MsoNormal">- PREP pipeline<u></u><u></u></p></div><div><p class="MsoNormal">- Remove PREP-interpolated channels<u></u><u></u></p></div><div><p class="MsoNormal">- Remove 1 additional channel for rank consistency<u></u><u></u></p></div><div><p class="MsoNormal">- 1hz FIR hi-pass<u></u><u></u></p></div><div><p class="MsoNormal">- Epoch -500 to 1000ms no baseline correction<u></u><u></u></p></div><div><p class="MsoNormal">- Epoch rejection<u></u><u></u></p></div><div><p class="MsoNormal">- AMICA (using EEG(:,:) -- is it ok to concatenate epochs like this?)<u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Here's the output log (using the cluster):<u></u><u></u></p></div><p class="MsoNormal"><a href="https://cmu.box.com/s/t7j3shmwjj1wj8to80au8mdm6b5676rh" target="_blank">https://cmu.box.com/s/t7j3shmwjj1wj8to80au8mdm6b5676rh</a><u></u><u></u></p><div><p class="MsoNormal"><u></u> <u></u></p></div><div><p class="MsoNormal">Many thanks, <u></u><u></u></p></div><div><p class="MsoNormal">Kevin<u></u><u></u></p></div><div><div><div><div><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:"Arial","sans-serif"">--</span><u></u><u></u></p></div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:"Arial","sans-serif"">Kevin Alastair M. 
Tan</span><u></u><u></u></p><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:"Arial","sans-serif"">Lab Manager/Research Assistant</span><u></u><u></u></p><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:"Arial","sans-serif"">Department of Psychology & Center for the Neural Basis of Cognition</span><u></u><u></u></p></div><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:"Arial","sans-serif"">Carnegie Mellon University</span><u></u><u></u></p></div><div><p class="MsoNormal"><u></u> <u></u></p><div><div><p class="MsoNormal"><span style="font-size:7.5pt;font-family:"Arial","sans-serif""><a href="https://www.google.com/maps/place/40%C2%B026%2729.5%22N+79%C2%B056%2744.0%22W/@40.4414869,-79.9455701,61m/data=!3m1!1e3!4m2!3m1!1s0x0:0x0" target="_blank">Baker Hall 434</a> | <a href="mailto:kevintan@cmu.edu" target="_blank">kevintan@cmu.edu</a> | <a href="http://tarrlabwiki.cnbc.cmu.edu/index.php/KevinTan" target="_blank">tarrlab.org/kevintan</a></span><u></u><u></u></p></div></div></div></div></div></div></div></div></div></div></div></div></div></blockquote></div><br></div>
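A footnote on the annealing heuristic Jason describes above: a rough Python sketch (illustrative only, not the actual runica.m implementation; the 60-degree threshold and 0.98 step are stand-ins for runica's annealdeg/annealstep parameters) of reducing the lrate only when consecutive update directions disagree sharply:

```python
import math

def angle_deg(u, v):
    # Angle in degrees between two update directions (flattened weight deltas).
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    cos = max(-1.0, min(1.0, dot / (norm_u * norm_v)))  # clamp for safety
    return math.degrees(math.acos(cos))

def anneal_lrate(lrate, prev_delta, delta, annealdeg=60.0, annealstep=0.98):
    # Reduce the lrate only when the estimate is "moving erratically"
    # (large angle between consecutive updates); keep it when the
    # trajectory is smooth, i.e. "heading right for" the optimum.
    if angle_deg(prev_delta, delta) > annealdeg:
        return lrate * annealstep
    return lrate

lrate = 1e-3
print(anneal_lrate(lrate, [1.0, 0.1], [0.9, 0.2]))   # small angle: lrate kept
print(anneal_lrate(lrate, [1.0, 0.1], [-1.0, 0.0]))  # large angle: lrate reduced
```

As Jason notes, in practice the angle rarely stays below the threshold, so a rule like this tends to shrink the lrate on nearly every iteration until it reaches the preset minimum.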