[Eeglablist] eeg data statistic analysis
Евгений Машеров
emasherov at yandex.ru
Fri Dec 26 23:59:44 PST 2025
Dear Antonella,
(This is purely my personal opinion).
I used the mean frequency as a power-weighted average of frequency, where the weights are the spectral power at the corresponding frequencies. I did not take logarithms. A log-based version may also be a meaningful indicator, but it is a different one; it is closer to a geometric mean frequency.
We also calculated the standard deviation of frequency about this mean, with the same power weights, and treated it as an independent signal parameter complementing the mean frequency: the more diverse the spectral composition, the larger the standard deviation.
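If it helps to see this concretely, here is a minimal sketch in Python/NumPy of the calculation as I describe it above (the function and variable names are only illustrative, and the signal is synthetic):

import numpy as np
from scipy.signal import welch

def spectral_mean_and_sd(x, fs):
    # Power spectral density of one channel (Welch's method, 2-second windows)
    freqs, psd = welch(x, fs=fs, nperseg=int(2 * fs))
    weights = psd / psd.sum()                  # spectral power as weights
    mean_f = np.sum(weights * freqs)           # power-weighted mean frequency
    sd_f = np.sqrt(np.sum(weights * (freqs - mean_f) ** 2))  # spectral SD
    # Taking the power-weighted mean of log(frequency) and exponentiating
    # would instead give a geometric mean frequency, a different indicator.
    return mean_f, sd_f

# Synthetic example: a 10 Hz "alpha-like" oscillation plus noise, 2 s at 250 Hz
fs = 250
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
print(spectral_mean_and_sd(x, fs))

Restricting freqs and psd to a band before the weighting would give a band-limited version, but, as I explain next, I would only do that when the band is chosen on meaningful grounds.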
The calculation was done over the entire EEG spectrum, not for individual bands. Calculating within fixed bands can lead to paradoxical situations. For example, suppose the alpha rhythm is defined, as is commonly accepted, as 8-13 Hz, but in a patient it has slowed to 7 Hz while keeping all the properties of the alpha rhythm (occipital localization, sinusoidal shape, blocking on eye opening). Instead of a slowing of the alpha rhythm, we would report an "acceleration of the theta rhythm." And an alpha rhythm accelerated to 15 Hz (which can occur, for example, during hyperbaric oxygenation) would be interpreted as a "slowing of the beta rhythm." That said, I have seen programs that calculate average frequencies per band, and I believe their authors have reasons for such calculations. To my mind, calculating over a narrow range rather than the whole spectrum makes sense only if the range is defined by the researcher on meaningful grounds, not by the standard partition.
If you're using the Friedman test, normality testing isn't necessary; it's nonparametric. Normality testing is crucial for classical ANOVA.
However, checking for gross errors isn't a bad idea with nonparametric methods either.
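For the comparison itself, a sketch of the Friedman test in Python/SciPy is below (the numbers are made up; I assume one mean-frequency value per subject for each of your three modalities):

import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical mean frequencies (Hz): one value per subject per modality
rng = np.random.default_rng(1)
visual      = 10.0 + rng.normal(0, 0.5, size=20)
auditory    = 10.2 + rng.normal(0, 0.5, size=20)
audiovisual =  9.8 + rng.normal(0, 0.5, size=20)

# Nonparametric repeated-measures comparison; no normality check is required,
# though screening the values for gross errors beforehand is still wise
stat, p = friedmanchisquare(visual, auditory, audiovisual)
print("Friedman chi-square = %.2f, p = %.3f" % (stat, p))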
Perhaps an example of applying this calculation would be of interest. I'll try to attach the text of the article (in Russian, abstract in English) to my response, but if it doesn't work, I can send it to the address you provided. I can also try to translate it if you find something interesting.
Yours truly,
Eugen Masherov
> Dear EEGlab community,
> I am writing in the hope that you can help me clarify my ideas about the steps preceding or following the extraction of EEG data characteristics. Data collected with a 60-channel headset were divided into 2-second epochs, and as output I obtained the average frequency for each individual stimulus per subject. The goal is to understand how each modality in which the stimulus is presented (visual, auditory, or audiovisual) affects the average frequency in the alpha, beta, and theta bands across all subjects.
> Now, please excuse me, but I am neither an engineer nor a neuroscientist; I am working with EEG data as a designer.
> So, before proceeding with the statistical analysis of the data (repeated-measures ANOVA / Friedman test), I would like to understand what steps I should take and when. Should the log10 transform always be applied, even if I assume a non-normal distribution of the data? (Do you have any reference studies?) Should the standard deviation be used? And when should the normality check be done, if it needs to be done at all?
> Do you know of any similar studies that follow the same objectives that I could use as examples?
>
> I hope you can help me in some way!
> I am glad if you have managed to follow me this far!
>
> Thank you.
> Best regards,
> Antonella