[Eeglablist] Forcing Double Precision in Makoto's Preprocessing Pipeline

Makoto Miyakoshi mmiyakoshi at ucsd.edu
Mon May 17 17:16:17 PDT 2021

Dear Malte,

This is a follow-up.
One of my colleagues kindly pointed out that your description in the
following part is not accurate, and I agree.

> If my simple thinking is correct, this only wastes space on the hard
drive (in my case, approx. 50 GB).

In reality, regardless of whether single or double precision is selected
in the EEGLAB options, the data EEGLAB saves to disk are always in single
precision. Several SCCN colleagues have confirmed this to me in the past.
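As a sanity check, the expected size of the saved data follows directly from the sample count and bytes per sample (4 for single, 8 for double). Here is a rough back-of-the-envelope calculation in Python (EEGLAB itself is MATLAB; the channel count of 64 is my assumption purely for illustration, since Malte did not state it):

```python
# Rough .fdt/.set data-size estimate: channels * samples * bytes_per_sample.
# The 64-channel count and 1-hour length are illustrative assumptions.
fs = 512                     # sampling rate in Hz
hours = 1                    # recording length
n_chans = 64                 # hypothetical channel count
n_samples = fs * 3600 * hours

size_single = n_chans * n_samples * 4   # float32: 4 bytes per sample
size_double = n_chans * n_samples * 8   # float64: 8 bytes per sample

print(size_single / 1e6, "MB as single")   # ~472 MB
print(size_double / 1e6, "MB as double")   # ~944 MB
```

Under these assumptions a 1-hour, 512 Hz recording is roughly 472 MB in single precision, consistent with the ~500 MB file Malte reports; saving as double would indeed double that, which is exactly what pop_saveset prevents by casting to single.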

> The .set file is then also double the size (in this case, 1 GB).

I thought you were talking about the data loaded into RAM, which does
double in size if you choose the 'double' option.
But if you mean the .set file (or, for an old-schooler like me, the .fdt
file) saved on the hard drive, it should not double in size.

Let me cite my colleague's report below. If you do not agree, please try
the following to confirm it.

> For the current GitHub version of EEGLAB (and commits going back 10 years
or more), the logic on line 223 of pop_saveset ensures that the data are
converted to single before being saved to disk if they are not single
already.
> I tested this by changing line 92 of sigprocfunc/floatwrite.m to
"fwrite(fid,A,class(A));", and it then saves datasets at double the
original size when they have been converted from single to double. This
breaks other things, but at least it proves my point.
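The cast-to-single logic my colleague describes can be sketched as follows. Note that pop_saveset and floatwrite.m are MATLAB; this is a Python stdlib sketch purely for illustration, and the filenames are made up:

```python
import array
import os

def save_as_single(path, data):
    # Sketch of pop_saveset's behavior (the real code is MATLAB):
    # always write 32-bit floats, whatever the in-memory precision.
    with open(path, 'wb') as fid:
        array.array('f', data).tofile(fid)

# In-memory "double" data (Python floats are 64-bit).
data = [0.1 * i for i in range(1000)]

# Class-preserving write, like the modified floatwrite: 8 bytes per sample.
with open('double.raw', 'wb') as fid:
    array.array('d', data).tofile(fid)

# EEGLAB-style write: cast to single first, 4 bytes per sample.
save_as_single('single.raw', data)

print(os.path.getsize('double.raw'))  # 8000 bytes
print(os.path.getsize('single.raw'))  # 4000 bytes
```

The class-preserving write is the analogue of "fwrite(fid,A,class(A));" in the experiment above, and it is what doubles the file size; the default cast-to-single write keeps the file at its original size.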

To conclude, remember this: EEGLAB's design philosophy is that data should
be in single precision at all times, including saving and loading, except
when a matrix operation requires double precision.
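To see why double precision still matters for those matrix operations (for example, the long accumulations inside ICA), here is a small illustration. Again, EEGLAB is MATLAB; this Python sketch round-trips through `struct` to emulate 32-bit floats:

```python
import struct

def to_single(x):
    # Round a Python float (64-bit) to the nearest 32-bit float.
    return struct.unpack('f', struct.pack('f', x))[0]

# Adding 1e-8 to 1.0: representable in double, but below single's
# resolution near 1.0 (float32 machine epsilon is ~1.19e-7).
as_double = 1.0 + 1e-8
as_single = to_single(1.0 + 1e-8)

print(as_double > 1.0)   # True: the increment is kept in double
print(as_single == 1.0)  # True: the increment is rounded away in single
```

Tiny per-sample contributions like this accumulate over millions of samples, which is why EEGLAB promotes the data to double for the computation itself while keeping single precision for storage.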


On Thu, May 13, 2021 at 1:11 PM Malte Anders via eeglablist <
eeglablist at sccn.ucsd.edu> wrote:

> Dear list,
> Makoto states in his preprocessing pipeline that one should force EEGLAB to
> use double precision. I am wondering whether this is a smart choice.
> I am importing EEG files with 24-bit resolution (I hope this is even
> relevant info). I noticed that a 1-hour EEG file with fs = 512 Hz in the
> manufacturer's .hdf5 format is roughly 500 MB and stays approximately that
> size when imported into EEGLAB with single precision (the .set file
> is also ~500 MB). Even when forcing double precision, EEG.data is still
> stored as single after importing.
> Only after performing an EEGLAB operation such as filtering or ASR is
> EEG.data converted to "double". The .set file is then also double the
> size (in this case, 1 GB). In my opinion, this creates information out
> of thin air: when the original file was 32-bit (or 24-bit in this case),
> filtering does not magically add information to it that needs double
> the disk space.
> On top of that, it is mentioned in quite a few places that if double
> precision is necessary for operations such as ICA, it is used
> automatically in the process.
> Thus, why should I perform the very first step in Makoto's preprocessing
> pipeline and change the options to force "double precision"? If my simple
> thinking is correct, this only wastes space on the hard drive (in my case,
> approx. 50 GB). Are there any specific steps that really need this
> checkbox? On top of that, the checkbox is even hidden by default in
> EEGLAB 2021, so you really have to want to force EEGLAB to double
> precision...
> Best wishes!
> Malte
