[Eeglablist] The KARA ONE Database: Phonological Categories in imagined and articulated speech

Makoto Miyakoshi mmiyakoshi at ucsd.edu
Wed Aug 5 10:16:29 PDT 2015


Dear Frank,

Very nice! I've often seen requests (many from India) to share EEG data, and I
believe that sharing data in this way will facilitate research activity in this
(mainly engineering) field.

Makoto

On Mon, Aug 3, 2015 at 6:32 AM, frank <frank at cs.toronto.edu> wrote:

> We are making a new 24 GB dataset, called Kara One, freely available.
> This database combines 3 modalities (EEG, face tracking, and audio) during
> imagined and articulated speech using phonologically relevant phonemic and
> single-word prompts. It is the result of a collaboration between the
> Toronto Rehabilitation Institute (in the University Health Network) and the
> Department of Computer Science at the University of Toronto.
>
> In the associated paper (abstract below), we show how to accurately
> classify imagined phonological categories solely from EEG data.
> Specifically, we obtain up to 90% accuracy in classifying imagined
> consonants versus imagined vowels and up to 95% accuracy in distinguishing
> stimulus states from active imagination states, using advanced deep-belief networks.
>
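For readers who want a starting point with these data, here is a rough, hypothetical
sketch of a binary classification pipeline of the kind described above: spectral
band-power features computed from EEG epochs and fed to a support-vector-machine
baseline. This is not the authors' actual preprocessing, feature set, or deep-belief
network; the sampling rate, band edges, array shapes, and labels below are illustrative
assumptions only.

# Hypothetical sketch only -- not the authors' pipeline.
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

FS = 1000.0                                              # assumed sampling rate (Hz)
BANDS = [(1, 4), (4, 8), (8, 15), (15, 30), (30, 50)]    # assumed band edges (Hz)

def bandpower_features(epochs):
    """epochs: (n_trials, n_channels, n_samples) -> (n_trials, n_channels * n_bands)."""
    freqs, psd = welch(epochs, fs=FS, nperseg=256, axis=-1)
    feats = []
    for lo, hi in BANDS:
        idx = (freqs >= lo) & (freqs < hi)
        feats.append(psd[..., idx].mean(axis=-1))        # mean power in each band
    return np.log(np.stack(feats, axis=-1)).reshape(len(epochs), -1)

# Random placeholders standing in for imagined-speech trials labeled
# consonant (1) vs. vowel (0); replace with epochs cut from the KARA ONE files.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((80, 62, 2000))             # shapes are arbitrary here
labels = rng.integers(0, 2, size=80)

svm_baseline = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(svm_baseline, bandpower_features(epochs), labels, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))

With real data, per-trial feature matrices would replace the random arrays; the SVM
here only plays the role of the baseline classifiers mentioned in the abstract below.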
> Data from 14 participants are available here:
> http://www.cs.toronto.edu/~complingweb/data/karaOne/karaOne.html.
>
> If you have any questions, please contact Frank Rudzicz at
> frank at cs.toronto.edu.
>
>
>
> Best regards,
>
> Frank
>
>
>
> PAPER: Shunan Zhao and Frank Rudzicz (2015). Classifying phonological
> categories in imagined and articulated speech. In *Proceedings of ICASSP
> 2015*, Brisbane, Australia.
>
> ABSTRACT This paper presents a new dataset combining 3 modalities (EEG,
> facial, and audio) during imagined and vocalized phonemic and single-word
> prompts. We pre-process the EEG data, compute features for all 3
> modalities, and perform binary classification of phonological categories
> using a combination of these modalities. For example, a deep-belief network
> obtains accuracies over 90% on identifying consonants, which is
> significantly more accurate than two baseline support vector machines. We
> also classify between the different states (resting, stimuli, active
> thinking) of the recording, achieving accuracies of 95%. These data may be
> used to learn multimodal relationships, and to develop silent-speech and
> brain-computer interfaces.
>
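Along the same lines, the deep-belief network mentioned in the abstract can be
approximated (only roughly; scikit-learn has no DBN with supervised fine-tuning) by
stacking greedily trained RBM feature layers in front of a logistic classifier. The
feature dimensions, layer sizes, and labels below are again hypothetical placeholders,
not the configuration used in the paper.

# Rough DBN-style stand-in: stacked RBMs + logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

dbn_like = Pipeline([
    ("scale", MinMaxScaler()),   # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])

# X: per-trial features (e.g., the band-power matrix above, or features pooled
# across the three modalities); y: a binary label such as stimulus vs. active thinking.
X = np.random.default_rng(1).random((80, 310))
y = np.random.default_rng(2).integers(0, 2, size=80)
dbn_like.fit(X, y)
print("training accuracy:", dbn_like.score(X, y))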
>
>



-- 
Makoto Miyakoshi
Swartz Center for Computational Neuroscience
Institute for Neural Computation, University of California San Diego