[Eeglablist] Forcing Double Precision in Makotos Preprocessing Pipeline
mmiyakoshi at ucsd.edu
Tue Jun 22 16:57:59 PDT 2021
Sorry for the late reply.
I'm not entirely sure about EEGLAB's behavior here, particularly whether it
consistently keeps data in single or double precision.
Last week we held the 30th EEGLAB workshop, and I used the latest version of
EEGLAB (v2021). I noticed that the single/double option has been removed from
the menu. This means the assumption now is that the data are always in
single precision. If EEG.data becomes double after some process, that is a
bug, or at least an unwanted state from the developers' point of view. If
you would be so kind, please report it on the EEGLAB GitHub issue tracker.
Thank you Malte!
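For anyone hitting the same issue in the meantime, a minimal workaround
sketch (assuming EEG.data has been promoted to double by a step such as
clean_rawdata/ASR; the filename below is only an example) is to cast the
data back to single before continuing:

```matlab
% Check the current precision of the data matrix
disp(class(EEG.data));            % e.g. 'double' after ASR in this scenario

% Cast back to single to halve the in-memory footprint
EEG.data = single(EEG.data);

% Save as usual; pop_saveset writes single-precision floats either way
EEG = pop_saveset(EEG, 'filename', 'mydata.set');
```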
On Tue, May 18, 2021 at 9:10 AM Malte Anders <malteanders at gmail.com> wrote:
> Dear Makoto,
> first things first: it's always a pleasure hearing from you and
> discussing things with you, so thank you for taking the time to reply.
> I read and understood everything you've written. I hate to break it to
> you, but I found the culprit: it's ASR (or: Tools -> Reject data using
> Clean Rawdata and ASR). See this screenshot:
> I have forced double precision under "Preferences" and simply set the
> maximum acceptable 0.5-second-window standard deviation to 20 (I unchecked
> every other option). After some calculations, the data set size doubles:
> EEG.data is converted to double (it was single before, including during
> filtering etc.), and the .set file saved after that just sits there,
> wasting space, storing the EEG data as double rather than single.
> Is this a bug or a feature?
> On Tue, May 18, 2021 at 02:27, Makoto Miyakoshi via eeglablist <
> eeglablist at sccn.ucsd.edu> wrote:
>> Dear Malte,
>> This is a follow up.
>> One of my colleagues kindly pointed out that your description in the
>> following part is not the case, and I agree.
>> > If my simple thinking is correct, this only wastes space on the hard
>> drive (in my case, approx. 50 Gb).
>> The reality is that, regardless of whether single or double precision is
>> selected in the EEGLAB options, data saved by EEGLAB are always in single
>> precision. Multiple SCCN colleagues have mentioned this to me in the
>> past.
>> > The .set file is then also double the size (in this case, 1 Gb).
>> I thought you were talking about the data loaded in RAM, which is
>> doubled in size if you choose the 'double' option.
>> But if you mean the .set file (for an old schooler like me, .fdt file)
>> saved on HDD, it should not double the size.
>> Let me cite my colleague's report below. If you do not agree, please
>> try and confirm the following.
>> > For the current github version of eeglab (and commits back 10 years or
>> more), on line 223 of pop_saveset, the logic ensures that it converts the
>> data to be saved to disk as single if they are not single.
>> > I tested this by changing line 92 of sigprocfunc/floatwrite.m to
>> "fwrite(fid,A,class(A));" and it then saves datasets at double the
>> original size when they have been converted from single to double. This
>> breaks other things, but at least it proves my point.
>> To conclude, remember this: EEGLAB's design philosophy is that data
>> should be in single precision at all times, including saving and
>> loading, except when a matrix operation requires double.
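The save-time behavior described above can be sketched roughly like this (a
simplified illustration of the logic reported for pop_saveset/floatwrite,
not the actual EEGLAB source; the filename is only an example):

```matlab
% Simplified sketch: data are forced to single precision before
% being written to disk, regardless of their in-memory class
if ~isa(EEG.data, 'single')
    tmpdata = single(EEG.data);   % double -> single, halving the bytes
else
    tmpdata = EEG.data;
end

% The .fdt payload is then written as 4-byte floats
fid = fopen('mydata.fdt', 'wb');
fwrite(fid, tmpdata, 'float32');
fclose(fid);
```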
>> On Thu, May 13, 2021 at 1:11 PM Malte Anders via eeglablist <
>> eeglablist at sccn.ucsd.edu> wrote:
>> > Dear list,
>> > Makoto states in his preprocessing pipeline that one should force
>> > EEGLAB to use double precision. I am wondering if this is a smart
>> > choice?
>> > I am importing EEG files with 24-bit resolution (I hope this is even
>> > relevant info). I noticed that a 1 hr EEG file with fs = 512 Hz in the
>> > manufacturer's .hdf5 format is roughly 500 MB and stays approximately
>> > the same size when importing it into EEGLAB with single precision (the
>> > .set file is also ~500 MB). Even when forcing double precision,
>> > EEG.data is stored as single after importing.
>> > Only after performing an EEGLAB operation such as filtering or ASR is
>> > EEG.data converted to "double". The .set file is then also double the
>> > size (in this case, 1 GB). In my opinion, this creates information out
>> > of thin air: since the original file was 32 bit (or 24 bit in this
>> > case), filtering does not magically add more information to it that
>> > needs double the disk space.
>> > On top of that, it is mentioned in quite a few places that if double
>> > precision is necessary for operations such as ICA, the conversion is
>> > done automatically in the process.
>> > Thus, why should I perform the very first step in Makoto's
>> > preprocessing pipeline and change the options to force "double
>> > precision"? If my simple thinking is correct, this only wastes space
>> > on the hard drive (in my case, approx. 50 GB). Are there any specific
>> > steps that really need this checkbox? On top of that, the checkbox is
>> > even hidden in EEGLAB 2021 by default, so you really have to want to
>> > force EEGLAB to use double precision...
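As a sanity check on those numbers, the raw data size can be estimated from
duration, sampling rate, channel count, and bytes per sample (the channel
count below is a guessed example, not from the original recording):

```matlab
fs       = 512;    % sampling rate in Hz
duration = 3600;   % 1 hour in seconds
nchan    = 64;     % example channel count (assumption)

bytes_single = fs * duration * nchan * 4;   % float32: 4 bytes/sample
bytes_double = fs * duration * nchan * 8;   % float64: 8 bytes/sample

fprintf('single: %.0f MB, double: %.0f MB\n', ...
        bytes_single/1e6, bytes_double/1e6);
% With these assumptions: roughly 472 MB single vs. 944 MB double,
% consistent with the ~500 MB vs. ~1 GB observed above.
```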
>> > Best wishes!
>> > Malte
>> > _______________________________________________
>> > Eeglablist page: http://sccn.ucsd.edu/eeglab/eeglabmail.html
>> > To unsubscribe, send an empty email to
>> > eeglablist-unsubscribe at sccn.ucsd.edu
>> > For digest mode, send an email with the subject "set digest mime" to
>> > eeglablist-request at sccn.ucsd.edu
> Best regards,
> Malte Anders