[Eeglablist] about runica() function

Jason Palmer japalmer at ucsd.edu
Sun Oct 15 15:30:51 PDT 2006


Dear Ary,

The Natural Gradient is the regular (stochastic) gradient postmultiplied by
W^T * W, as in the first jpeg you attached. The second jpeg is the regular
gradient. In both, y is defined so that 1-2y is the gradient of the log of
the logistic density source model (or its sub-gaussian modification). I'm
not sure what you mean when you say they are the same rule. The Natural
Gradient is postmultiplied by W^T*W, and the regular gradient isn't.
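
To make this concrete, here is a minimal NumPy sketch (my own illustration,
not the actual runica() code, which is in MATLAB) of one single-sample update
under each rule, assuming the plain logistic nonlinearity and ignoring the
bias, sphering, and learning-rate details:

    import numpy as np

    def logistic(u):
        return 1.0 / (1.0 + np.exp(-u))

    def regular_gradient_step(W, x, lrate):
        # Regular (stochastic) gradient: W^{-T} + (1 - 2y) x^T
        # x is one data sample as an (n, 1) column vector.
        u = W @ x                      # source estimates for this sample
        y = logistic(u)
        grad = np.linalg.inv(W).T + (1.0 - 2.0 * y) @ x.T
        return W + lrate * grad

    def natural_gradient_step(W, x, lrate):
        # Same gradient postmultiplied by W^T W, which collapses to
        # (I + (1 - 2y) u^T) W -- the inverse term is absorbed.
        u = W @ x
        y = logistic(u)
        grad = (np.eye(W.shape[0]) + (1.0 - 2.0 * y) @ u.T) @ W
        return W + lrate * grad

Postmultiplying the regular gradient by W^T * W and simplifying gives exactly
the second form, which is why both updates use the same (1 - 2y) nonlinearity
while still being different rules.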

A note on the explanation in the first attached jpeg: getting the inverse of
W isn't really that expensive, even with hundreds of channels, in comparison
to the expense of multiplying W times the data to get the source estimates.
The benefit of using the Natural Gradient is that it has a Newton-like
effect of scaling the gradient (positive definitely) so that it has faster
convergence. For large numbers of channels, the regular gradient might take
an unreasonable amount of time to converge, not because the inverses take
time, but because the convergence is less uniform in the W space, and many
more steps are required.
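
As a rough, purely illustrative cost comparison (my numbers, not anything
measured from runica()): with n channels and T samples per pass, the inverse
costs on the order of n^3 operations while projecting the data onto the rows
of W costs on the order of n^2 * T, e.g.

    # illustrative operation counts, assuming n = 100 channels, T = 100000 samples
    n, T = 100, 100_000
    inv_cost  = n ** 3        # one n x n matrix inverse per step
    proj_cost = n ** 2 * T    # forming the source estimates U = W @ X
    print(proj_cost / inv_cost)   # ~1000, so the inverse is ~0.1% of the work

so even at hundreds of channels the inverse is a small part of each pass; the
real gain from the Natural Gradient is the better-conditioned step direction.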

Hope this is helpful.

Jason

-----Original Message-----
From: eeglablist-bounces at sccn.ucsd.edu
[mailto:eeglablist-bounces at sccn.ucsd.edu] On Behalf Of Ary Adilson Morales
Alvarado
Sent: Sunday, October 08, 2006 11:26 PM
To: eeglablist at sccn.ucsd.edu
Subject: [Eeglablist] about runica() function


Dear Friends,

I have been reading about the Infomax algorithm. I know that the runica()
function in EEGLAB implements Infomax with Amari's natural gradient, but I was
reading an article about this, and it seems that the natural gradient is not
implemented there.

The article is "Artifact Extraction from EEG Data using Independent
Component Analysis"

Shadab Mozaffar
David W. Petr

They show that their learning rule is the same as the one you use in runica()
with the natural gradient, but they use a stochastic gradient.

Could you please look at the attached pictures? They show a different way of
deriving the same learning rule, but without using the natural gradient.

Thanks in advance

Ary






