[Eeglablist] Nyquist, downsampling, and EEG
Emmanuelle Tognoli
tognoli at ccs.fau.edu
Mon Mar 4 10:26:56 PST 2019
Dear Eric,
Did you mean the "lowest reasonable sampling rate"? The Nyquist theorem
says that to represent a frequency F, you need a sampling rate of at
least twice that frequency, Fs = 2*F; put the other way around, a given
sampling rate Fs can only faithfully represent frequencies up to the
Nyquist frequency Fn = Fs/2. This is a minimum, and in practice we like
to sample much more (though we try to keep it reasonable on the other
side: each time you double the sampling rate, you add to computation
time and file storage burden). When there are no further specifics, I
personally use a sampling rate of 10 times the fastest frequency of
interest (Ffoi) that I want to quantify properly. So if my Ffoi is
100Hz, I'd have an Fs of 1,000Hz (and therefore an Fn of 500Hz, and for
good measure, see below for clarification, an analog filter at 200Hz).
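If it helps to see that arithmetic in one place, here is a minimal
sketch in Python (the function name and the factors of 10 and 2 are
just my rule of thumb above, not a standard):

    def suggest_acquisition(ffoi_hz):
        """Rule-of-thumb acquisition parameters for a fastest frequency
        of interest (Ffoi). The factors are heuristics, not standards."""
        fs = 10 * ffoi_hz        # sample ~10x the fastest frequency of interest
        fn = fs / 2              # Nyquist frequency implied by that rate
        analog_lp = 2 * ffoi_hz  # analog lowpass well below Fn (example value)
        return fs, fn, analog_lp

    print(suggest_acquisition(100))  # (1000, 500.0, 200), as in the example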
You might imagine (or draw on paper) the undulation of a sine function.
If you get only two points to represent each cycle, as per the minimum
of the Nyquist theorem (sample 2 evenly spaced dots per cycle, starting
from a random timepoint, and connect them with a ruler), then you have
a rough, angular, poor-looking representation of your nice and smooth
sine function (one point somewhere on the up side, one on the down
side). That bad-looking data is what goes into your digital file of an
EEG. Amplitude is quite distorted; frequency probably also. Phase, as
you suggest, will be hard to estimate accurately. In the singular case
where your sampling rate Fs is exactly twice the signal's frequency and
you start sampling at a zero crossing, you'd even manage to obtain a
perfectly flat signal, as if your wave did not undulate at all. So the
more points you add (higher sampling rate Fs) to discretely sample your
cycles, 2 at a very bare minimum, maybe at least 5 or 10, the nicer
looking your data.
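You can see both effects with a few lines of numpy (the 10Hz test
frequency is arbitrary, just for illustration):

    import numpy as np

    f = 10.0                            # signal frequency (Hz), illustrative
    # Sample at exactly 2*f starting on a zero crossing: every sample
    # lands on a zero of the sine, the "perfectly flat" worst case above.
    t_2x = np.arange(0, 1, 1 / (2 * f))
    print(np.sin(2 * np.pi * f * t_2x).round(6))        # all zeros

    # Sample at 10*f: the shape of each cycle is now recognizable.
    t_10x = np.arange(0, 1, 1 / (10 * f))
    print(np.sin(2 * np.pi * f * t_10x)[:10].round(2))  # one smooth cycle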
Further, to ensure that the discrete sampling of the signal does not
betray you (improperly showing brainwaves that were initially faster
than Fn as slow frequencies, a phenomenon called aliasing), we always
apply at recording, as an important safety measure, an analog lowpass
filter (ideally at a good distance from the Nyquist frequency Fn, say,
2 times lower than Fn). This is to be sure that all faster-than-Nyquist
frequencies are suppressed and therefore unable to be misrepresented.
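Here is what aliasing looks like in a toy numpy example (all numbers
arbitrary): a 300Hz tone sampled at 250Hz, so well above Fn = 125Hz,
comes out looking exactly like a 50Hz wave:

    import numpy as np

    fs = 250.0                       # sampling rate (Hz)
    t = np.arange(0, 2, 1 / fs)      # 2 s of samples
    x = np.sin(2 * np.pi * 300 * t)  # 300 Hz tone, above Fn = 125 Hz

    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    spec = np.abs(np.fft.rfft(x))
    print(freqs[np.argmax(spec)])    # 50.0: the 300 Hz tone has aliased

An analog filter applied before digitization is the only way to prevent
this; once the data are sampled, the 300Hz and the 50Hz waves are
literally the same numbers.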
Further digital filters might be applied later; the filter you mention
might be one of these, perhaps aimed at suppressing line noise (50Hz in
Europe, 60Hz in the US), in the event that the recording carries plenty
of it and you do not have enough epochs to cancel it out by averaging.
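For what it's worth, one common digital implementation of such a
line-noise filter is a narrow notch, sketched here with scipy (the
sampling rate, noise amplitude, and Q factor are illustrative, and I am
not claiming this is what your particular filter does):

    import numpy as np
    from scipy.signal import iirnotch, filtfilt

    fs = 1000.0                                   # sampling rate (Hz)
    t = np.arange(0, 2, 1 / fs)
    eeg = np.random.randn(t.size)                 # stand-in for recorded EEG
    noisy = eeg + 2 * np.sin(2 * np.pi * 50 * t)  # add 50 Hz line noise

    b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)       # narrow notch at 50 Hz
    clean = filtfilt(b, a, noisy)                 # zero-phase filtering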
Now, two more questions follow on the heels of this:
-what is the fastest frequency that is time-locked and worthy of your
effort? This depends on the specifics of your research, and the answer
differs depending on whether you work in a discovery-based or
confirmatory paradigm. Gamma is certainly not a rare occurrence; you
might also look at the literature on Auditory Brainstem Responses:
those people try to capture early stages of auditory processing, and
they usually have to sample at 20,000Hz to get a good handle on the
multiple waves developing in the first 10 milliseconds after the
auditory event.
-how good is your syncing between the subject's brainwaves and a
digital tag reporting the time of occurrence of the stimulus? If you
have substantial jitter there (your computer program not being too good
at telling your EEG system when the event happened), then your efforts
to catch faster time-locked components will likely not be productive
(see the small simulation below), which is why many people might settle
for a low sampling rate paired with a low analog filter cutoff.
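To make the jitter point concrete, here is a small simulation sketch
(the 40Hz component, the 10ms jitter, and the trial count are arbitrary
choices): averaging time-locked trials whose latency jitter is
comparable to the component's period nearly erases that component.

    import numpy as np

    rng = np.random.default_rng(0)
    fs, f, n_trials = 1000, 40, 200   # sampling rate, component (Hz), trials
    t = np.arange(0, 0.5, 1 / fs)

    def evoked_average(jitter_ms):
        """Average a 40 Hz component across trials with latency jitter."""
        shifts = rng.normal(0, jitter_ms / 1000, n_trials)
        return np.mean([np.sin(2 * np.pi * f * (t - s)) for s in shifts],
                       axis=0)

    print(np.ptp(evoked_average(0)))   # ~2.0: perfect sync, full amplitude
    print(np.ptp(evoked_average(10)))  # much smaller: 10 ms jitter smears 40 Hz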
To summarize: if you suspect some interesting activity at fast
frequencies that would be time-locked to a stimulus, and if you have a
really good computer program for administering those stimuli, then go
for a much higher sampling rate than 100Hz. If you have restrictions on
any of the above, then you can settle for a more modest sampling rate.
100Hz, though, unless required by other constraints, seems quite low.
I hope this helps, with kind regards,
Emmanuelle Tognoli - Center for Complex Systems and Brain Sciences
On 3/3/2019 12:07 PM, Eric Rawls wrote:
> Hi list, I have a short, more theoretically designed question.
> Typical ERP studies will apply a filter around 50 Hz to remove
> frequencies above line noise.
> Doesn't this mean that for any data filtered this way the highest
> reasonable sampling rate is 50*2=100 Hz?
> So why do we all use 500, or even 1000 Hz, when sampling EEG signals?
> Does this lower limit on the sampling rate need to increase for phase
> based analyses etc?
> I've been curious about this for a while and wanted to open it up to a
> group of experts. Why do we sample above the Nyquist rate in our EEG
> experiments?
> Thanks for the discussion
> Eric Rawls, M.S.
> Graduate Research Assistant & Instructor
> Department of Psychological Sciences
> University of Arkansas
>