How does the brain combine prior expectations with heard speech during speech perception?

Humans use prior expectations to comprehend speech, but over-reliance on these expectations can produce perceptual illusions when they mismatch the incoming sensory signal. Such misperceptions are especially likely in noisy acoustic environments, or when expectations only partially overlap with the sensory input, which is why misheard song lyrics are so common. Two classes of theory can explain these misperceptions: sharpening schemes, in which neural representations matching prior knowledge are enhanced, and prediction error schemes, in which neural representations encode the difference between prior knowledge and the sensory signal (Aitchison & Lengyel, 2017). A previous fMRI study found evidence for prediction error representations during perception of degraded speech (Blank et al., 2018). This project aims to extend those findings by characterising the time course of these computations during speech perception using MEG/EEG.
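The contrast between the two schemes can be sketched in a toy example. Everything below is illustrative only (the vectors, the multiplicative sharpening rule, and the subtraction are simplifying assumptions, not the models tested in the cited studies): a "prior" and a partially mismatching "sensory input" are feature vectors, sharpening amplifies sensory features that match the prior, while prediction error coding represents only the mismatch.

```python
import numpy as np

# Hypothetical feature vectors (illustrative values, not real data):
prior = np.array([1.0, 0.0, 0.0])    # expected word representation
sensory = np.array([0.8, 0.1, 0.6])  # heard, partially mismatching input

# Sharpening scheme (assumed multiplicative gain): features consistent
# with the prior are enhanced, the rest pass through unchanged.
sharpened = sensory * (1 + prior)

# Prediction error scheme: the representation encodes only the
# difference between the sensory input and the prior.
prediction_error = sensory - prior

print("sharpened:", sharpened)            # [1.6 0.1 0.6]
print("prediction error:", prediction_error)  # [-0.2  0.1  0.6]
```

Note the opposite signatures: when expectations match the input well, sharpened representations grow stronger while prediction error representations shrink toward zero, which is what makes the two schemes distinguishable in neural data.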

I’m collaborating on this project with Matt Davis (MRC-CBU Programme Leader) and Connor Doyle (MPhil student).

Máté Aller
Postdoctoral Research Associate

I am a cognitive computational neuroscientist investigating human speech perception with the aim of building better assistive speech technologies and AI speech recognition systems.