I’ve recently come across a video of the NCS2020 talk “Decoding the neural processing of speech” by Tobias Reichenbach, PhD, from Imperial College London, held last February at the Institute of Neurosciences of the University of Barcelona.
You can read a wee abstract before watching the video:
ABSTRACT. Understanding speech in noisy backgrounds requires selective attention to a particular speaker. Humans excel at this challenging task, while current speech recognition technology still struggles when background noise is loud. The neural mechanisms by which we process speech remain, however, poorly understood, not least due to the complexity of natural speech. Here we describe recent progress obtained through applying machine learning to neuroimaging data of humans listening to speech in different types of background noise. In particular, we develop statistical models to relate characteristic features of speech such as pitch, amplitude fluctuations and linguistic surprisal to neural measurements. We find neural correlates of speech processing both at the subcortical level, related to pitch, and at the cortical level, related to amplitude fluctuations and linguistic structures. We also show that some of these measures allow diagnosing disorders of consciousness. Our findings may be applied in smart hearing aids that automatically adjust speech processing to assist a user, as well as in diagnostics of brain disorders.
Here it goes!
Feature image from Pexels – CC0 license.