[Eeglablist] Units in output of timefreq - wavelet normalization

Makoto Miyakoshi mmiyakoshi at ucsd.edu
Fri Aug 19 11:56:50 PDT 2016


Dear Mike,

After reading the material Andreas referred to, I'm now not sure about this:

> Particularly with a constant time-domain Gaussian width, the wavelet gets
> wider in the frequency domain with increasing frequency.

> it is due to the increasing width of the wavelet in the frequency domain.

This seems valid for the case of the short-time Fourier transform, for
example. However, in our example code, the wavelet being used changes its
Gaussian width, right? Then the 'Heisenberg box' changes its shape
accordingly, and you can't say that the wavelet lets through more signal
because of its narrower time width... am I wrong? What do you mean by this
example?
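
Just to make my picture concrete (a rough sketch with example numbers, using
the usual convention sigma_t = cycles/(2*pi*f)): with a constant number of
cycles, the wavelet gets narrower in time and wider in frequency as the
center frequency goes up, so the Heisenberg box is reshaped rather than
enlarged.

cycles = 7;
for fc = [10 40]                          % two example center frequencies (Hz)
    sigma_t = cycles / (2*pi*fc);         % time-domain Gaussian width (s)
    sigma_f = 1 / (2*pi*sigma_t);         % frequency-domain width (Hz)
    fprintf('fc = %2d Hz: sigma_t = %5.1f ms, sigma_f = %3.1f Hz\n', fc, 1000*sigma_t, sigma_f);
end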

Makoto

On Wed, Aug 17, 2016 at 11:05 AM, Mike X Cohen <mikexcohen at gmail.com> wrote:

> Hi everyone. I agree with Andreas that normalization is a tricky issue
> and, to some extent, a philosophical one. In general, I recommend against
> any interpretation of "absolute" values, because (1) they depend on a
> variety of uninteresting factors like electrode montage, equipment, filter
> characteristics, and so on, and (2) they are entirely incomparable across
> methods. You can compare dB or % change between EEG, MEG, and LFP, but it
> is impossible to compare EEG microvolts with LFP microvolts, MEG teslas,
> change in light fluorescence, etc.
>
> I point this out because I think we have here mainly an academic
> discussion for the vast majority of neuroscience research, particularly for
> any neuroscience researchers that hope to link their findings to other
> pockets of neuroscience regardless of montage, species, decade, etc. That
> said, if there's one thing academics love, it's an academic discussion ;)
> so here are my two cents (the Dutch don't use pennies, so you'll have to
> decide whether to round down to zero or up to .05 euros).
>
> From Andreas' code, you can add the following two lines after "signal,"
> which will make a new signal, a chirp. You can then add colorbars to both
> TF plots to see that the power is accurately reconstructed after max-val
> normalization. The two numbers in variable f are for the start and end
> frequencies of the linear chirp.
>
> % sweep the phase-driving frequency only up to mean(f): the instantaneous
> % frequency (the derivative of the phase) then runs from f(1) to f(2)
> f = [25 60];
> signal = sin(2*pi .* linspace(f(1),mean(f),length(t)) .* t)';
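> 
> As a quick standalone check (t and srate here are just examples, not the
> variables from Andreas' script), you can confirm that the instantaneous
> frequency of this chirp really does run from f(1) to f(2):
> 
> srate = 1000;  t = (0:1/srate:2)';                        % example time axis, 2 s
> f = [25 60];
> phs = 2*pi .* linspace(f(1), mean(f), length(t))' .* t;   % chirp phase
> instfreq = diff(phs) * srate / (2*pi);                    % d(phase)/dt in Hz
> fprintf('%.1f Hz -> %.1f Hz\n', instfreq(1), instfreq(end));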
>
> The next point concerned the increase in power over frequency. This is a
> feature, not a bug. First of all, it is highly dependent on the number of
> cycles. For example, note that the power in the top-middle plot goes up to
> just over .2. Now change the 'cycles' parameter to 30; the power now goes
> up to around .05. In other words, the horrible linear increase was cut to a
> quarter. A constant number of cycles over a large range of frequencies is a
> poor choice of parameter, and it should come as no surprise that poor
> parameter choices lead to poor results.
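> 
> If you want to see the effect of the cycles parameter in isolation, here is
> a small self-contained sketch (the sampling rate, noise length, and
> frequencies are placeholders, not Andreas' settings): with max-value
> normalization of the wavelet spectrum, the mean power of filtered white
> noise grows roughly linearly with frequency for a fixed number of cycles,
> and the slope shrinks as the number of cycles goes up.
> 
> srate = 500;
> noise = randn(1, 20*srate);                      % 20 s of white noise
> nConv = numel(noise);
> tw    = (-(nConv-1)/2 : (nConv-1)/2) / srate;    % wavelet time axis
> frex  = 5:5:80;                                  % wavelet center frequencies (Hz)
> for cycles = [7 30]
>     avgpow = zeros(size(frex));
>     for fi = 1:numel(frex)
>         s   = cycles / (2*pi*frex(fi));                        % time-domain Gaussian width (s)
>         wav = exp(2i*pi*frex(fi)*tw) .* exp(-tw.^2/(2*s^2));   % complex Morlet wavelet
>         wf  = fft(wav);
>         wf  = wf ./ max(abs(wf));                              % max-value normalization
>         avgpow(fi) = mean(abs(ifft(fft(noise) .* wf)).^2);     % mean power of filtered noise
>     end
>     figure; plot(frex, avgpow, 'o-');
>     xlabel('Frequency (Hz)'); ylabel('Mean power (a.u.)');
>     title(sprintf('%d wavelet cycles', cycles));
> end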
>
> So why does this even happen? Particularly with a constant time-domain
> Gaussian width, the wavelet gets wider in the frequency domain with
> increasing frequency. This means that more of the signal is being let
> through the filter. More signal = more power. I do not see how this is an
> artifact, or even a problem. The more of the spectrum you look at, the more
> power you will see. If you want to maximize the power, then use the entire
> spectrum. In fact, total FFT power is the same as total time-domain power,
> so the most power you can get from the FFT will be sum(signal.^2), which is
> a lot more than what you'd get from any wavelet.
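> 
> The Parseval point is easy to verify (the random test signal is just an
> illustration; MATLAB's unnormalized fft needs the 1/N factor on the
> frequency side):
> 
> signal    = randn(1000,1);                               % any test signal
> timePower = sum(signal.^2);
> freqPower = sum(abs(fft(signal)).^2) / length(signal);
> % timePower and freqPower agree to numerical precision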
>
> In other words, the increase in power with increasing frequency is *not*
> due to increasing frequency; it is due to the increasing width of the
> wavelet in the frequency domain. This seems worse for white noise because
> of the flat spectrum, but it will be less noticeable for real brain
> signals, which have 1/f^c shape (whether EEG is broadband and noisy depends
> very much on the characteristics of the signal one is investigating). And
> again, this also depends on the wavelet width parameter.
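> 
> To see this with a more brain-like spectrum, you can swap the white noise
> for a 1/f^c test signal; a rough sketch (c, srate, and n are arbitrary
> here) is to shape the amplitude spectrum of white noise by f^(-c/2) and
> transform back:
> 
> srate = 500;  n = 10*srate;  c = 1;
> f   = abs([0:n/2, -(n/2-1):-1])' * srate/n;    % two-sided frequency axis (Hz)
> amp = f.^(-c/2);  amp(1) = 0;                  % 1/f^(c/2) shaping, DC removed
> pinkish = real(ifft( fft(randn(n,1)) .* amp ));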
>
> I'll conclude by reiterating that interpreting any "absolute" voltage
> value should be avoided whenever possible. Of course, there is always the
> occasional exception, but I think we can all agree that we should focus
> more on effect sizes than on arbitrary values. Some kind of baseline
> normalization is almost always best, and is really the best way to make sure
> your findings can be compared across the growing span of brain imaging
> techniques in neuroscience.
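> 
> For example, given a frequencies-by-times power matrix, dB baseline
> normalization is only a few lines (tfpow, times, and the baseline window
> are placeholders here, not the output of any particular function):
> 
> baseidx  = times >= -500 & times <= -200;                  % baseline window (ms)
> basepow  = mean(tfpow(:, baseidx), 2);                     % mean baseline power per frequency
> tfpow_db = 10*log10( bsxfun(@rdivide, tfpow, basepow) );   % dB change from baseline
> % percent change: 100 * bsxfun(@rdivide, bsxfun(@minus, tfpow, basepow), basepow)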
>
> Mike
>
> --
> Mike X Cohen, PhD
> mikexcohen.com
>



-- 
Makoto Miyakoshi
Swartz Center for Computational Neuroscience
Institute for Neural Computation, University of California San Diego

