The neural bases of audio-visual benefit in speech perception

I used time-frequency and partial coherence analyses on MEG recordings of human listeners to demonstrate that comprehension of degraded speech improves when visual speech signals boost neural synchronization to auditory speech. I also co-supervise a doctoral researcher who is investigating individual differences in audio-visual benefit in speech perception and their underlying neural mechanisms in a normally hearing population. In the long run, we plan to extend these insights to hearing-impaired individuals and explore ways to add support for visual speech in hearing prostheses. See below for associated publications, code and data.
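To give a flavour of what "neural synchronization to auditory speech" means in practice, here is a minimal Python sketch computing magnitude-squared coherence between a simulated speech envelope and a simulated neural signal. This is illustrative only: the published analyses used partial coherence on source-localized MEG data, and the sampling rate, signal model, and parameters below are assumptions, not the actual pipeline.

```python
import numpy as np
from scipy.signal import coherence

# Illustrative parameters (not from the actual study)
fs = 250.0                      # sampling rate in Hz
t = np.arange(0, 60, 1 / fs)    # 60 s of data
rng = np.random.default_rng(0)

# Simulated speech envelope: slow (~4 Hz, roughly syllabic-rate) modulation
envelope = 1 + np.sin(2 * np.pi * 4 * t)

# Simulated "neural" signal: partially tracks the envelope, plus noise
neural = 0.5 * envelope + rng.standard_normal(t.size)

# Magnitude-squared coherence as a function of frequency
freqs, coh = coherence(envelope, neural, fs=fs, nperseg=int(4 * fs))

peak = freqs[np.argmax(coh)]
print(f"Peak coherence {coh.max():.2f} at {peak:.1f} Hz")
```

In this toy example the coherence spectrum peaks near 4 Hz, the rate at which the simulated neural signal tracks the envelope; in the real analyses, stronger speech-brain coherence in the presence of visual speech is what indexes the audio-visual benefit.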

Máté Aller
Postdoctoral Research Associate

I am a cognitive computational neuroscientist investigating human speech perception with the aim of building better assistive speech technologies and AI speech recognition systems.