[Eeglablist] Installing CUDAICA on Windows 10 (2022 update)
Makoto Miyakoshi
mmiyakoshi at ucsd.edu
Mon May 9 10:50:38 PDT 2022
Dear CUDAICA enthusiasts,
One of my collaborators, Ernie Pedapati, shared this nice report with me.
With his permission, let me share it with you.
https://urldefense.proofpoint.com/v2/url?u=https-3A__lonelyneuron.substack.com_p_install-2Dcudaica-2Dfor-2Dwindows-2D10-3Fs-3Dw&d=DwIFaQ&c=-35OiAkTchMrZOngvJPOeA&r=kB5f6DjXkuOQpM1bq5OFA9kKiQyNm1p6x6e36h3EglE&m=zV9A8mGNf6-MddgKKL8q_jI1qFBSLE9cM67fRkfE1MJwi8ndb4DEmiEazoHK5Kjw&s=biWvSmegeW1fqPCUCEG6k7ARtpbJo_zgjMbrf_DTp3k&e=
(1) He suggests that only three MKL files seem to be required:
mkl_core.2.dll, mkl_def.2.dll, and mkl_intel_thread.2.dll
(2) He shows how to generate the MKL files directly and add them to the
plugin folder.
(3) He shows all the tools needed and how the CUDAICA code is modified.
This way, you save several hundred megabytes of disk space as well as the
effort of managing the environment paths.
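As a quick sanity check after copying the DLLs, here is a minimal MATLAB
sketch (the plugin folder name 'cudaica_win' is an assumption; adjust it to
your installation):

    % Verify that the three MKL runtime DLLs sit next to the CUDAICA binary.
    pluginDir = fullfile(fileparts(which('eeglab')), 'plugins', 'cudaica_win');
    requiredDlls = {'mkl_core.2.dll', 'mkl_def.2.dll', 'mkl_intel_thread.2.dll'};
    for k = 1:numel(requiredDlls)
        if exist(fullfile(pluginDir, requiredDlls{k}), 'file')
            fprintf('found   %s\n', requiredDlls{k});
        else
            fprintf('MISSING %s -- copy it into %s\n', requiredDlls{k}, pluginDir);
        end
    end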
Makoto
On Tue, Apr 26, 2022 at 11:33 AM Makoto Miyakoshi <mmiyakoshi at ucsd.edu>
wrote:
> Dear Yunhui,
>
> Thank you for the update. I confirmed on your GitHub site that several
> files were updated 3 days ago.
> https://urldefense.proofpoint.com/v2/url?u=https-3A__github.com_CloudyDory_cudaica-5Fwin&d=DwIFaQ&c=-35OiAkTchMrZOngvJPOeA&r=kB5f6DjXkuOQpM1bq5OFA9kKiQyNm1p6x6e36h3EglE&m=zV9A8mGNf6-MddgKKL8q_jI1qFBSLE9cM67fRkfE1MJwi8ndb4DEmiEazoHK5Kjw&s=2BGXtm18geJHlgrCrECAQPRP1RFei1oJmoIkSdWnC2U&e=
>
> > Microsoft Visual Studio is NOT needed if you use the pre-built binary.
>
> This kind of clarification is really helpful for non-technical users. I
> appreciate your effort and contribution to the community!
>
> Makoto
>
> On Sat, Apr 23, 2022 at 7:16 PM Yunhui Zhou <yhzhou17 at fudan.edu.cn> wrote:
>
>> Dear colleagues,
>>
>> Due to a new round of COVID lockdown I finally have some time to update
>> CUDAICA for Windows. It is now compatible with the new Intel oneAPI Base
>> Toolkit. The old Intel MKL 2020 is still supported. I have actually
>> compiled two binary exe files (for the old and new Intel MKL), and
>> "cudaica.m" will automatically select which exe file to run depending on
>> your installed MKL version. No procedure in the actual ICA calculation is
>> changed.
>>
>> Since there are two exe files now, the installation process is
>> simplified. Modifying "icadefs.m" is no longer needed. Selecting the
>> correct exe file is now automatically done in "cudaica.m".
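>>
>> For readers curious how that selection could work, here is a purely
>> hypothetical sketch (not the actual cudaica.m; the exe names, binDir, and
>> scriptFile below are made up for illustration):
>>
>>     % Assumption: the exe files and MKL DLLs sit next to cudaica.m.
>>     binDir = fileparts(which('cudaica.m'));
>>     scriptFile = fullfile(tempdir, 'cudaica.sc');        % hypothetical binica-style config
>>     if exist(fullfile(binDir, 'mkl_core.2.dll'), 'file')  % new oneAPI runtime present
>>         exeName = 'cudaica_oneapi.exe';                   % hypothetical file name
>>     else                                                  % fall back to Intel MKL 2020 build
>>         exeName = 'cudaica_mkl2020.exe';                  % hypothetical file name
>>     end
>>     system(['"' fullfile(binDir, exeName) '" ' scriptFile]);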
>>
>> Best,
>>
>> Yunhui Zhou
>>
>> > -----Original Message-----
>> > From: "Makoto Miyakoshi via eeglablist" <eeglablist at sccn.ucsd.edu>
>> > Sent: 2021-11-21 01:56:10 (Sunday)
>> > To: "eeglablist at sccn.ucsd.edu" <eeglablist at sccn.ucsd.edu>
>> > Cc:
>> > Subject: Re: [Eeglablist] Installing CUDAICA on Windows 10 (2021 update)
>> >
>> > Update--
>> > John Kiat (UC Davis), who happened to be working on a CUDAICA
>> > installation independently during the same period (what a coincidence),
>> > emailed me. He uploaded a MATLAB screen recording of CUDAICA running to
>> > YouTube. If you want to feel the speed, check it out.
>> >
>> > https://urldefense.proofpoint.com/v2/url?u=https-3A__www.youtube.com_watch-3Fv-3DeVgpmvQ9LVU&d=DwIFaQ&c=-35OiAkTchMrZOngvJPOeA&r=kB5f6DjXkuOQpM1bq5OFA9kKiQyNm1p6x6e36h3EglE&m=hNn0oeQyLnH7lF_0qTcOEiiBrn-SwURSUdUvP2HKoG0v_UCZ2wi7HXTDonUb6kg2&s=1KcsNL2LFlPcNNgBNjCVSI6zinY1R1VvAjoE4ylIJs4&e=
>> > John told me that in his environment (Ryzen 7 3800x, GTX 1080 Ti) he
>> > confirmed a x15 boost even after addressing the 'drawnow' slowing issue.
>> > That is a huge difference.
>> > I updated my Wiki section with this link and info.
>> >
>> > https://sccn.ucsd.edu/wiki/Makoto%27s_useful_EEGLAB_code#By_using_CUDAICA_.2811.2F20.2F2021_updated:_Thank_you_Ugo.2C_Yunhui.2C_and_John.21.29
>> >
>> > Makoto
>> >
>> > On Thu, Nov 18, 2021 at 3:48 PM Scott Makeig <smakeig at gmail.com> wrote:
>> >
>> > > Makoto and all -
>> > >
>> > > I was writing about the case where the data were to be compressed into a
>> > > much smaller number of dimensions than channels. If the data are
>> > > rank-deficient, reducing the dimension of the data to its true rank using
>> > > PCA is quite acceptable (and necessary to allow ICA decomposition). Sorry
>> > > if I misunderstood the case here. The Artoni paper shows that for the
>> > > (typical) EEG study he was analyzing, removing further dimensions not only
>> > > reduced the number of interpretable ICs but reduced their 'dipolarity',
>> > > i.e. the degree to which they were compatible with a scalp projection from
>> > > a localized cortical source area. I assume this might also reduce their
>> > > independence (i.e., increase the pairwise mutual information of their time
>> > > courses, although I don't think we checked this).
>> > >
>> > > Example 1: I record 128 channels, then decide that the signals from 17
>> > > channels are 'bad'. I then decide to linearly interpolate signals for
>> > > those 17. I then convert the recomposed data to average reference. The
>> > > data rank should then be 128-17-1 = 110, since linear interpolation does
>> > > not add *independent information* to the data - and converting to average
>> > > reference loses a further dimension. Applying PCA reduction to 110
>> > > dimensions as a precursor to ICA is necessary and correct here.
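>> > >
>> > > As a concrete check (a sketch only, following the arithmetic above), the
>> > > expected rank can be compared with the empirical rank of the data matrix:
>> > >
>> > >     nChan = 128; nInterp = 17; nAvgRef = 1;
>> > >     expectedRank = nChan - nInterp - nAvgRef;     % 128 - 17 - 1 = 110
>> > >     empiricalRank = rank(double(EEG.data(:,:)));  % EEG: the EEGLAB dataset
>> > >     fprintf('expected rank %d, empirical rank %d\n', expectedRank, empiricalRank);
>> > >     % Use the smaller of the two as the PCA dimension passed to ICA.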
>> > >
>> > > Example 2: I record 128 channels but want to do a 'quick' ICA
>> > > decomposition of dimension 64. So I reduce the data to the largest 64 PCs
>> > > and then perform ICA decomposition. Though the data volume (RMS) accounted
>> > > for by the 64 removed dimensions is very likely small, noticeable
>> > > degradation of the 'brain' ICs results. This is because the PCs 'cut
>> > > across' all the IC effective sources - and the remaining 64 dimensions
>> > > sum, in part, the activities of all 128 IC effective source processes, not
>> > > allowing the derived ICs to be truly independent of each other or aligned
>> > > with only one effective source.
>> > >
>> > > Scott
>> > >
>> > > On Thu, Nov 18, 2021 at 6:27 PM Makoto Miyakoshi via eeglablist <
>> > > eeglablist at sccn.ucsd.edu> wrote:
>> > >
>> > >> Dear Scott,
>> > >>
>> > >> Apart from the value of the study, I don't like the side effect
>> > >> Fiorenzo's PCA paper caused: it made non-engineers superstitious about
>> > >> the use of PCA (and now you push this fear campaign.)
>> > >>
>> > >> In the Artoni paper in question, at the very first line of the results
>> > >> section he reported that only 8+/-2.5 PCs were left when retaining 95% of
>> > >> the variance out of 71 scalp electrodes. You can easily imagine what
>> > >> happens when you reject 63/71 PCs before ICA. In this sense, the
>> > >> conclusion of this study is 'duh' to me (is it not?)
>> > >>
>> > >> 'Applying PCA before ICA is suboptimal' is a qualitative statement. But
>> > >> what if you reject only one or two PCs just to make the data full rank?
>> > >> These questions can be answered only by performing numerical and
>> > >> simulation studies. We should educate people to reject qualitative
>> > >> statements and instead think quantitatively.
>> > >>
>> > >> Dear John,
>> > >>
>> > >> It's not about CUDAICA per se, but if you care about the optimality of
>> > >> the ICA results, I recommend you compare results from channel rejection
>> > >> against results from PCA dimension reduction so that you obtain 70 ICs in
>> > >> both cases. Compare the results side by side, both visually and
>> > >> quantitatively using ICLabel. Most likely, they will not show much
>> > >> difference. Then you can feel better about going with PCA, since you
>> > >> don't need to lose scalp electrodes.
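>> > >>
>> > >> A minimal sketch of that comparison (assuming 126 channels, the ICLabel
>> > >> plugin installed, and that pop_runica forwards the 'pca' option to
>> > >> runica; channelsToDrop is whatever 56 channels you would otherwise
>> > >> reject):
>> > >>
>> > >>     % Decomposition A: keep all channels, reduce to 70 dimensions by PCA.
>> > >>     EEGpca  = pop_runica(EEG, 'icatype', 'runica', 'extended', 1, 'pca', 70);
>> > >>     % Decomposition B: reject channels down to 70, then run full rank.
>> > >>     EEGchan = pop_select(EEG, 'nochannel', channelsToDrop);
>> > >>     EEGchan = pop_runica(EEGchan, 'icatype', 'runica', 'extended', 1);
>> > >>     % Label both with ICLabel and compare the class probabilities side by side.
>> > >>     EEGpca  = iclabel(EEGpca);
>> > >>     EEGchan = iclabel(EEGchan);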
>> > >>
>> > >> Makoto
>> > >>
>> > >>
>> > >>
>> > >> On Wed, Nov 17, 2021 at 1:22 PM Scott Makeig <smakeig at gmail.com>
>> wrote:
>> > >>
>> > >> > John,
>> > >> >
>> > >> > Makoto seems to forget the result of Fiorenzo Artoni that applying PCA
>> > >> > before ICA is suboptimal - better to reduce the number of channels and
>> > >> > run ICA decomposition full-rank. Or, if you are more ambitious /
>> > >> > exacting, run multiple ICA decompositions on different channel subsets
>> > >> > (for example, random sets of 70 channels picked from 128) and then apply
>> > >> > clustering to the resulting independent component (IC) maps - I haven't
>> > >> > seen that approach applied yet ...
>> > >> >
>> > >> > Artoni, F., Delorme, A. and Makeig, S., 2018. Applying dimension
>> > >> > reduction to EEG data by Principal Component Analysis reduces the
>> > >> > quality of its subsequent Independent Component decomposition.
>> > >> > *NeuroImage*, *175*, pp.176-187.
>> > >> > https://urldefense.proofpoint.com/v2/url?u=https-3A__www.sciencedirect.com_science_article_pii_S1053811918302143-3Fcasa-5Ftoken-3DKT3XImh-2D-2Dl0AAAAA-3Aut0ozF7mVGYDngMVu-2Di0PowjzqzZEhuIl153z6MNgM8NRHDXZZj2CWlYEd0948glBn11q-5FXk7B8&d=DwMFaQ&c=-35OiAkTchMrZOngvJPOeA&r=pyiMpJA6aQ3IKcfd-jIW1kWlr8b1b2ssGmoavJHHJ7Q&m=YlyRvbJOBqbcxuJuFRpFCdKF7Tkg1Qj32rXx8Aa2570eE8IWixleWy79aWfkbQP9&s=n4PV6FRx_PCHi9cIjG7_rLRcb9j62ChP5inSC0zrMnE&e=
>> > >> >
>> > >> > Fiorenzo's RELICA plug-in does something related - applying (full-rank)
>> > >> > ICA decomposition to randomly selected subsets of data points, followed
>> > >> > by component clustering. Zeynep Akalin Acar has recently demonstrated
>> > >> > that using RELICA component cluster scalp map means *and* variances can
>> > >> > increase the accuracy of high-resolution source location estimation.
>> > >> >
>> > >> > Acar, Z.A. and Makeig, S., 2020, October. Improved cortical source
>> > >> > localization of ICA-derived EEG components using a source scalp
>> > >> > projection noise model. In *2020 IEEE 20th International Conference on
>> > >> > Bioinformatics and Bioengineering (BIBE)* (pp. 543-547). IEEE.
>> > >> > https://urldefense.proofpoint.com/v2/url?u=https-3A__ieeexplore.ieee.org_iel7_9287816_9287978_09288020.pdf-3Fcasa-5Ftoken-3DdOEYtrVfetoAAAAA-3AQsY-2D7AQl9TzSyk3IMqlsy7KnMhUI-2DJ-2DQ68H5cKzjpbPWRVpN-2D4xrR-5FJPMBwDEDF-5FQ9nLkIHLH5U&d=DwMFaQ&c=-35OiAkTchMrZOngvJPOeA&r=pyiMpJA6aQ3IKcfd-jIW1kWlr8b1b2ssGmoavJHHJ7Q&m=YlyRvbJOBqbcxuJuFRpFCdKF7Tkg1Qj32rXx8Aa2570eE8IWixleWy79aWfkbQP9&s=Y8l5g2Z8tUOx5dUtb8pXqMm0HnvqsCYHwlarrNfdP_c&e=
>> > >> >
>> > >> > Scott
>> > >> >
>> > >> > On Wed, Nov 17, 2021 at 3:24 PM Makoto Miyakoshi via eeglablist <
>> > >> > eeglablist at sccn.ucsd.edu> wrote:
>> > >> >
>> > >> >> Dear John,
>> > >> >>
>> > >> >> I found it interesting that in your case runica()'s processing time
>> > >> >> increased linearly (63 -> 168 min) as the input data length increased
>> > >> >> (8 -> 25 min), but that for CUDAICA it did not (2.3 -> 2.8 min).
>> > >> >>
>> > >> >> If you have 126 channels, you want to have 126^2*30 = 476280 data
>> > >> >> points as a minimum (from SCCN's never-verified rule of thumb). But you
>> > >> >> have 275*474 = 130350 data points, which seems suboptimal for good
>> > >> >> learning. Perhaps you want to apply dimension reduction using PCA to
>> > >> >> obtain 70 ICs, so that the same rule of thumb predicts 70^2*30 = 147000
>> > >> >> data points for learning, which is much closer.
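>> > >> >>
>> > >> >> In code form, that rule of thumb is just:
>> > >> >>
>> > >> >>     k = 30;                          % SCCN's never-verified heuristic
>> > >> >>     nChan = 126; nPnts = 275 * 474;  % 130350 available data points
>> > >> >>     minPnts = @(n) n^2 * k;
>> > >> >>     fprintf('full-rank ICA wants %d points\n', minPnts(nChan)); % 476280
>> > >> >>     fprintf('70 ICs want %d points\n',         minPnts(70));    % 147000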
>> > >> >>
>> > >> >> Do you want to know more detail about this optimization?
>> > >> >> In fact, without running a simulation you can't theoretically determine
>> > >> >> what number is a good number. This is why I wrote this simulator as an
>> > >> >> EEGLAB plugin. Try it out to 'feel' how much deviation/violation from
>> > >> >> the 'rule of thumb' can negatively impact the decomposition.
>> > >> >>
>> > >> >> https://urldefense.proofpoint.com/v2/url?u=https-3A__www.youtube.com_watch-3Fv-3DCGOw04Ukqws&d=DwIFaQ&c=-35OiAkTchMrZOngvJPOeA&r=kB5f6DjXkuOQpM1bq5OFA9kKiQyNm1p6x6e36h3EglE&m=YfSSJdGbaWUJjsVGL_Bd3kbvoWDALgCGOA44Hn93INujbnQT8WNcijz3CAaY7Km2&s=HuuZdy7O3viOY-fIz_ayjeDkatQ_038Fa2wbfMDeg9I&e=
>> > >> >>
>> > >> >> Makoto
>> > >> >>
>> > >> >> On Tue, Nov 16, 2021 at 2:59 PM Richards, John <
>> > >> RICHARDS at mailbox.sc.edu>
>> > >> >> wrote:
>> > >> >>
>> > >> >> > I was curious about the speed differences for my applications. I have
>> > >> >> > tested this before, but did not write down my results.
>> > >> >> >
>> > >> >> > I ran an EEGLAB file, 126 channels * 275 samples * 474 trials, about 8
>> > >> >> > min of EEG data. This was done on a Linux node in a Linux cluster, an
>> > >> >> > Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz node, with 256 GB memory and
>> > >> >> > 28 cores. The runica appears to be working on 12 cores. The GPU was a
>> > >> >> > dual Tesla P100-PCIE-16G. The cudaica ran on one GPU.
>> > >> >> >
>> > >> >> > The runica version took 63 min. The cudaica version took 2 min 15 s;
>> > >> >> > runica appeared to be running on multiple CPUs, ~12 CPUs.
>> > >> >> >
>> > >> >> > I concatenated the data for 1422 trials, about 25 min.
>> > >> >> > Cudaica took 2 min 50 s;
>> > >> >> > runica took 2.8 hours, 12 CPUs.
>> > >> >> >
>> > >> >> > Most of our runs with infants take 8 to 10 min; some of our adult runs
>> > >> >> > are 25 min.
>> > >> >> >
>> > >> >> > I understand from the earlier conversation that binica might be able to
>> > >> >> > match these results? I'm not going to do a full test, but this convinces
>> > >> >> > me to stick with cudaica for now.
>> > >> >> >
>> > >> >> > John
>> > >> >> >
>> > >> >> >
>> > >> >> > -----Original Message-----
>> > >> >> > From: Richards, John
>> > >> >> > Sent: Thursday, November 11, 2021 1:15 AM
>> > >> >> > To: Makoto Miyakoshi <mmiyakoshi at ucsd.edu>; ugob at siu.edu;
>> > >> >> > eeglablist at sccn.ucsd.edu
>> > >> >> > Subject: RE: [Eeglablist] Installing CUDAICA on Windows 10 (2021
>> > >> update)
>> > >> >> >
>> > >> >> > Re CUDAICA: I was able to install it; I don't remember it being that
>> > >> >> > difficult. I had to mess around with the CUDA version.
>> > >> >> >
>> > >> >> > I have found it "blazing" fast compared to runica. I have not timed
>> > >> >> > it. We have 10-15 min sessions with EGI 128, 250 Hz, do the Prep
>> > >> >> > pipeline to get avg ref, and then CUDAICA. It takes < 5 min to do the
>> > >> >> > Prep, and < 5 min to do the CUDAICA; cf. 45 min to 60 min with runica.
>> > >> >> > I may not be using the most recent runica. BTW, we have fairly powerful
>> > >> >> > computers; we use 48 cores for the Prep pipeline, which is a vast
>> > >> >> > speedup, and V100s with 16 GB or 32 GB. Definitely not bargain chips.
>> > >> >> > We use the 48-core computers for the runica, but it does not appear to
>> > >> >> > profit from the multiple CPUs. The Prep pipeline also is very slow on
>> > >> >> > single CPUs, but very fast on the 48-CPU machines.
>> > >> >> >
>> > >> >> > I would be glad to share more details if anyone is interested.
>> > >> >> >
>> > >> >> > John
>> > >> >> >
>> > >> >> >
>> > >> >> > ***********************************************
>> > >> >> > John E. Richards
>> > >> >> > Carolina Distinguished Professor
>> > >> >> > Department of Psychology
>> > >> >> > University of South Carolina
>> > >> >> > Columbia, SC 29208
>> > >> >> > Dept Phone: 803 777 2079
>> > >> >> > Fax: 803 777 9558
>> > >> >> > Email: richards-john at sc.edu
>> > >> >> >
>> > >> >> >
>> > >> >> > https://urldefense.proofpoint.com/v2/url?u=https-3A__jerlab.sc.edu&d=DwIFAw&c=-35OiAkTchMrZOngvJPOeA&r=pyiMpJA6aQ3IKcfd-jIW1kWlr8b1b2ssGmoavJHHJ7Q&m=XWfhosWnNSjs97eRAV2Ysofk5w2Z2_mbQvfeek3KRqTVlZ-2fBHSCo5P_bnFInes&s=yvIsDcwOpKjhTPokE_cuv5RlAl7bUeNjmpt7-e34zWk&e=
>> > >> >> > *************************************************
>> > >> >> >
>> > >> >> > -----Original Message-----
>> > >> >> > From: eeglablist <eeglablist-bounces at sccn.ucsd.edu> On Behalf Of
>> > >> >> > Makoto Miyakoshi via eeglablist
>> > >> >> > Sent: Thursday, November 11, 2021 1:02 AM
>> > >> >> > To: EEGLAB List <eeglablist at sccn.ucsd.edu>; ugob at siu.edu
>> > >> >> > Subject: [Eeglablist] Installing CUDAICA on Windows 10 (2021
>> update)
>> > >> >> >
>> > >> >> > Dear list members,
>> > >> >> >
>> > >> >> > I summarized the steps to install cudaica(), which uses GPU
>> > >> >> > computation to calculate infomax ICA (Raimondo et al., 2012). The
>> > >> >> > result of the speed comparison between runica() and cudaica() was not
>> > >> >> > as dramatic as the x25 reported by the original paper, probably because
>> > >> >> > Tjerk's smart hack alone already gave a x4-5 speedup to runica().
>> > >> >> > Still, using a relatively cheap GTX 1660 (the pre-COVID price range is
>> > >> >> > $250), I confirmed a x4-5 speedup compared with runica(). The detailed
>> > >> >> > instructions can be found at the following link.
>> > >> >> >
>> > >> >> > https://sccn.ucsd.edu/wiki/Makoto%27s_useful_EEGLAB_code#By_using_CUDAICA_.2811.2F10.2F2021_added.29
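>> > >> >> >
>> > >> >> > If you want to time the same comparison on your own data, here is a
>> > >> >> > rough sketch (assuming the installed plugin registers a 'cudaica'
>> > >> >> > icatype with pop_runica -- check the plugin's README for the exact
>> > >> >> > call it expects):
>> > >> >> >
>> > >> >> >     tic; EEGcpu = pop_runica(EEG, 'icatype', 'runica',  'extended', 1); tCpu = toc;
>> > >> >> >     tic; EEGgpu = pop_runica(EEG, 'icatype', 'cudaica', 'extended', 1); tGpu = toc;
>> > >> >> >     fprintf('runica %.1f min, cudaica %.1f min, speedup x%.1f\n', ...
>> > >> >> >         tCpu/60, tGpu/60, tCpu/tGpu);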
>> > >> >> >
>> > >> >> > WARNING: The installation was difficult.
>> > >> >> >
>> > >> >> > Makoto
>> > >> >
>> > >> >
>> > >> > --
>> > >> > Scott Makeig, Research Scientist and Director, Swartz Center for
>> > >> > Computational Neuroscience, Institute for Neural Computation,
>> > >> > University of California San Diego, La Jolla CA 92093-0559,
>> > >> > http://sccn.ucsd.edu/~scott
>> > >> >
>> > >
>> > >
>> > > --
>> > > Scott Makeig, Research Scientist and Director, Swartz Center for
>> > > Computational Neuroscience, Institute for Neural Computation,
>> > > University of California San Diego, La Jolla CA 92093-0559,
>> > > http://sccn.ucsd.edu/~scott
>> > >