The EEGLAB News #17

Matthew Wisniewski, Ph.D.

Assistant Professor, Psychological Sciences, Kansas State University



How can two people hear the same sound but have very different psychological experiences of that sound? Dr. Matthew Wisniewski, Assistant Professor in Psychological Sciences at Kansas State University, is fascinated by how experience shapes the way we hear the world. Initially motivated by a love of music, he fortuitously signed up for a Biology of Memory class in college, where he learned about plasticity in the auditory system. This was an 'aha' moment. From that point on, he has devoted his efforts to learning all he can about the brain and how it processes sound. EEGLAB is a key tool in his research.

Can you explain your research and what inspired you to study this topic?
Most of my research is focused on how experience shapes the way we hear the world. The same sound can hit your ears as it does mine, but we can have very different psychological experiences. For instance, you can probably think of a time when you had trouble understanding an accent that was foreign to you, while other people had no problem. Presumably, these others had learning experiences with the accent that gave them the privilege to hear details that you could not. My research is focused on understanding the mechanisms that underlie the learning needed to improve sound perception, and the ways we can optimize training to facilitate it.

How did you first become interested in your field? Was there an “aha” moment when you realized you wanted to study the brain?
For me, the interest started with music. I didn't see science as my future at first. Out of high school, I had the strongly mistaken belief that I had a future as a rock star. The first rule of being a rock star is to play music that your parents don't like. I nailed that. However, you also need to play music that some other people like. That is way harder. I then tried the sound engineer route, making music recordings and thinking that there could be a future in that. That is really difficult too. I could hear that my recordings were bad and that the sound of professional recordings was good, but I could not for the life of me figure out how to make the transition. It involves picking out the fine spectral details of the recordings, adjusting their amplitude dynamics just right, and then mixing the different source signals together so that they blend well and don't overtake or mask each other.

I shadowed a professional sound engineer, watching as he took a mediocre recording and spent hours turning knobs on expensive equipment and adjusting parameters in recording software. With each tiny change, I could hardly hear any difference in the way the music sounded. When he was done, he would switch back and forth between the original and the final version. There it was!... the difference between bad and good. It was bizarre that I still did not get (or hear) the transition even when it was happening right in front of me.

It was around this same time that I was taking a Biology of Memory class at SUNY Buffalo from Dr. Eduardo Mercado III. He covered extensively his research on plasticity in the auditory system of rats, where maps of complex sound features could be molded by different training experiences, and where the representational fidelity of sounds in these maps could predict behavioral discrimination performance. This was the 'aha' moment. The privilege to hear those fine-grained details, and probably produce them with your own music, lies in how the brain has been shaped to process sound. I wanted to know more about this. Dr. Mercado was just starting a human research lab then. My way in as a research assistant was that I had the skills to edit sounds for his experiments. I stayed on in his lab to get my Ph.D.

How did you find out about the Swartz Center for Computational Neuroscience (SCCN), and Dr. Makeig’s lab and research?

I really owe my career to Dr. Makeig and SCCN. Dr. Mercado was part of a large NSF Science of Learning Center along with Scott when I was in graduate school. This was the Temporal Dynamics of Learning Center (TDLC). I really wanted to learn EEG so that I could explore plasticity and the brain dynamics that surround auditory perception and learning. However, we did not have the expertise or the facilities at SUNY Buffalo to support my interest. Scott and SCCN graciously allowed me to spend the summers working at SCCN to learn. I had no EEG experience coming in. It was definitely a jump into the deep end to start at one of the most cutting-edge EEG research facilities in the world. The SCCN group, especially Scott, Klaus Gramann, Ying Wu, Arnaud Delorme, and Julie Onton, were very helpful and kind to me. The skills I learned eventually landed me my postdoc at the US Air Force Research Laboratory in Dayton, OH. There I was able to build an EEG facility for them and simultaneously start my own independent program of research. This eventually led me to my current position at KSU, where they were looking for someone to build an EEG lab with a research program focused on cognitive and neural plasticity. It was a lot of happy accidents that all started at SCCN, where they were kind enough to give someone so green some help and opportunity.

CNAP Center
The Cognitive and Neurobiological Approaches to Plasticity Center, or CNAP, at Kansas State University
(Dr. Wisniewski, third row up, furthest to the left. Photo credit: Kansas State University)

How does EEGLAB help you in your research?
EEGLAB is used very frequently in my research and teaching duties at KSU. My favorite feature is 'EEG.history'. For my students, it very much helps with the transition from the GUI to command line, scripts, and functions. In my lab at KSU we program all experiments and analyses in MATLAB. I typically start students with running some simple EEG data processing in EEGLAB, then writing scripts to automate those analyses with the help of 'EEG.history'. It works really well as a teaching tool for students who are intimidated by coding.
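The GUI-to-script transition described above can be sketched roughly as follows. After a few steps in the EEGLAB GUI, the equivalent commands accumulate in EEG.history (printable with eegh), and pasting them into a script makes the analysis repeatable. The file names and filter settings below are hypothetical, for illustration only.

```matlab
% Print the commands EEGLAB has logged for this session; these mirror
% whatever was just done in the GUI and can seed a reusable script.
eegh;

% A script assembled from that history might look like this
% (hypothetical file names and parameters):
EEG = pop_loadset('filename', 'subject01.set', 'filepath', './data/');
EEG = pop_eegfiltnew(EEG, 1, 40);   % band-pass filter, 1-40 Hz
EEG = pop_reref(EEG, []);           % re-reference to the average
EEG = pop_saveset(EEG, 'filename', 'subject01_filt.set');
```

Once students see that each GUI action maps onto one logged command, wrapping the pasted history in a loop over subjects is a small step.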

We use EEGLAB at all stages of data processing, so of course all the importing, plotting, filtering, referencing, channel selection, etc. functions are incredibly useful. We get a lot of use out of ICA to examine independent brain-related ICs apart from each other. The AMICA utility and the ICLabel functions are extremely useful for this. We are also using the Neuroelectromagnetic Forward Head Modeling Toolbox (NFT) for dipole modeling of ICs.
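A minimal sketch of the decomposition-and-labeling step mentioned above, assuming the ICLabel plugin is installed (AMICA, via its own plugin, would replace the pop_runica call):

```matlab
% Decompose the data into independent components, then classify them
% with ICLabel and flag the components it labels 'Brain'.
EEG = pop_runica(EEG, 'icatype', 'runica');
EEG = pop_iclabel(EEG, 'default');   % requires the ICLabel plugin

% ICLabel stores a [nIC x 7] matrix of class probabilities; column 1
% corresponds to the 'Brain' class.
[~, labels] = max(EEG.etc.ic_classification.ICLabel.classifications, [], 2);
brainICs = find(labels == 1);        % indices of likely brain ICs
```

In practice one would inspect the flagged components (e.g., with pop_selectcomps) rather than trusting the classifier blindly.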

My favorite function is newtimef(). It has so many features, and it is great for introducing my students to the intricacies of time-frequency analyses. My students need to learn this function well.
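As an illustration, a typical newtimef() call on epoched data might look like the sketch below; the channel index, frequency range, and wavelet-cycle setting are arbitrary choices for the example, not a recommended analysis.

```matlab
% Time-frequency decomposition (ERSP and ITC) for one channel of an
% epoched dataset, using 3-cycle wavelets at the lowest frequency.
chan   = 10;                          % hypothetical channel index
frames = EEG.pnts;                    % samples per epoch
tlim   = [EEG.xmin EEG.xmax] * 1000;  % epoch limits in ms

[ersp, itc, ~, times, freqs] = newtimef( ...
    EEG.data(chan, :, :), frames, tlim, EEG.srate, 3, ...
    'freqs', [3 30], 'plotitc', 'off');
```

Part of what makes the function a good teaching tool is how many of these choices (cycles vs. FFT, baseline handling, bootstrap significance) are exposed as optional key/value arguments.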

If I'm working with EEG, I'm using EEGLAB in one way or another.

How do you combine the use of EEGLAB with other tools?

Recently, in a project looking at how processing pipelines could be optimized for ICA and the identification of ICs that characterize tau rhythms (i.e., auditory alpha rhythms), we needed to run a lot of parallel processing routines. Here, combining initial processing in EEGLAB with offline processing on KSU's supercomputer (mainly for running ICAs) was a huge time saver.

The plotting functions in EEGLAB are wonderful when combined with later image editing in graphics tools like Inkscape. We frequently employ other MATLAB toolboxes (e.g., the Statistics Toolbox) and custom functions to do joint behavioral/EEG analyses.

We also have been using individual MRIs to build more accurate head models for dipole modeling in NFT. (Photo: A subject participates in a listening experiment in the Auditory Learning & Cognition Lab)

What are some of the challenges in the field? What do you enjoy most?
One challenge I've been particularly interested in as of late is that auditory system alpha oscillations (a.k.a. tau rhythms) have proven incredibly difficult to characterize with EEG. Intracranial recordings and MEG data now convincingly demonstrate that tau rhythms exist, but they tend to be masked by stronger alpha-producing brain sources in scalp EEG recordings. We have ongoing work (with help from SCCN) investigating an ICA approach to tau rhythm analysis. Certainly, EEG is less restrictive and easier to access than intracranial and MEG methods in research and clinical settings. A method to measure tau in EEG would increase the rate and range of discoveries related to tau.

Regarding auditory learning, the field certainly has enough empirical data to support the psychological phenomenon I described in my own experience. There are a number of ideas around the mechanisms that produce this (e.g., changed states of attention during listening, associative learning, plasticity in signal representations, etc.). Studies are converging on the conclusion that perceptual learning comes about through each of these mechanisms. The challenge is to build a hybrid model that can make specific predictions about how learning is impacted differently by the different mechanisms. Then we need to empirically assess those predictions. It has also been a challenge (mainly because the empirical work is largely absent) to predict how auditory learning impacts other aspects of cognition. Most work has been focused on changes in auditory perception itself, but how does the learning impact the mental effort needed for listening?... or to do another task at the same time?... or impact the fidelity of auditory memories?... or later real-world choices (e.g., preferences for music; whether or not to wear hearing protection while working)? Recent work in my lab is expanding in these directions, and I am getting a lot of enjoyment out of that.

What do you hope to accomplish in the next seven years?

I would like to have a firm idea as to whether the ICA approach can give us a measure of tau rhythms in EEG. If the answer is yes, I would like to be on the way to using this to advance auditory learning theory (e.g., by relating tau to changes in prestimulus attentional states that could occur with learning), and applying the measure in a clinically relevant manner (e.g., using neurofeedback to upregulate tau in disorders like tinnitus that are known to have abnormally low tau states). Also, tenure would be great.

What is the most important question you would like to answer in your lifetime?
Questions like this are so difficult to answer. It feels like the answer ends up being a snapshot of the moment. No matter what I come up with, I'm reminded that if I had been asked the same question at the start of college, the answer would have been "How do you write a hit song with limited musical talent?". What will my future self feel about my answer today? I know what questions I want to have had a part in addressing: How can auditory training regimens be designed to optimize learning and its generalization? How does the brain balance plasticity with stability to make sense of the auditory world? How do auditory learning histories shape cognition beyond auditory perception? How can we use auditory training to produce meaningful results for real-world issues of auditory perception? That being said, I am completely open to future Matt stumbling on another "aha" moment and modifying his trajectory.

Would you like to share any other thoughts?
I met my wife in grad school (Ali Zakrzewski). She is also a cognitive psychologist working with EEG. I love that we get to work together every day. We just had our second daughter... they both love music! (Photo: Matt, Ali, Nadia, and Monika at play)

I need to say that Kansas State University's Cognitive and Neurobiological Approaches to Plasticity (CNAP) Center has provided a lot of support at this early stage of my career.

Finally, I am always looking for good graduate students to take on. If any readers are looking to go to graduate school for psychology and like what we are doing in the Auditory Learning & Cognition Lab at K-State, feel free to email me.


February 2024