Keeping track of speakers in the brain

NeuroVoID: Keeping track of speakers in the brain

Understanding a conversation between multiple participants requires that we not only understand the words being spoken, but also who is saying what, and to whom. The human brain is remarkably adept at identifying speakers and making inferences about them on the basis of even very short fragments of speech. Nevertheless, the way we keep track of who is speaking, and how we integrate this information with the message, is currently poorly understood. This project seeks to shed light on this question, using magnetoencephalography to record brain activity while listeners follow multi-talker discussions.

Audio-Motor Integration

Exploring audio-motor integration: a novel approach to overcoming hearing impairment

Human speech perception is robust in the face of adverse listening conditions, such as reverberation and background noise. Hearing loss degrades incoming speech signals, making speech perception more effortful and error-prone; it is an increasingly common disability with large societal costs. Recent neuroimaging work has suggested that one mechanism for restoring a degraded speech signal is grounded in articulatory-motor representations. This project investigates, using multiple complementary methods, the neural substrates of degraded speech perception, and specifically the role of articulatory-motor representations. Beyond enhancing our understanding of the neurobiology of language, this work could inform novel training methods that enable hearing-impaired individuals to exploit articulatory-auditory links to improve their speech comprehension.

Publications

Houweling, T., Becker, R., & Hervais-Adelman, A. (2023). Elevated pre-target EEG alpha power enhances the probability of comprehending weakly noise masked words and decreases the probability of comprehending strongly masked words. Brain Lang, 247, 105356. https://doi.org/10.1016/j.bandl.2023.105356

Houweling, T., Becker, R., & Hervais-Adelman, A. (2020). The noise-resilient brain: Resting-state oscillatory activity predicts words-in-noise recognition. Brain Lang, 202, 104727. https://doi.org/10.1016/j.bandl.2019.104727

Becker, R., & Hervais-Adelman, A. (2020). Resolving the Connectome, Spectrally-Specific Functional Connectivity Networks and Their Distinct Contributions to Behavior. eNeuro, 7(5), ENEURO.0101-20.2020. https://doi.org/10.1523/ENEURO.0101-20.2020

In Utero Vocal Learning

Human infants are born crying with an accent: their cry melodies reflect the pitch characteristics of the language of the environment in which they gestate.

This strand of research concerns itself with this from two perspectives:

  1. How does a fetus learn to implement an accent, despite having no opportunity to practice vocalisation in utero?
  2. Is there a benefit to a newborn of crying with a “native” accent?

To address these questions, we are undertaking an SNF-funded investigation examining both the acoustic features of infants’ cries that make them more or less aversive or salient to potential caregivers, and the in utero cerebral processes that underpin the acquisition of early precursors of human speech.


Publications

Hervais-Adelman, A., & Townsend, S. W. (2025). How did vocal communication come to dominate human language? A view from the womb. PLoS Biol, 23(4), e3003141. https://doi.org/10.1371/journal.pbio.3003141

The fleeting adjective - The neural basis of modification

Syntactic analyses across typologically distinct languages indicate that there are a number of parts of speech (PoS) with distinct functional roles, for instance: nouns (reference), verbs (predication) and adjectives (modification). Multiple cognitive neuroscience studies exploring the functional distinctions among PoS have distinguished between neural representations of prototypical nouns and verbs. In contrast to nouns and verbs, adjectives exhibit rather variable syntactic behaviour across languages. Basque adjectives, for instance, often resemble nouns in their functional role, while in Mandarin Chinese modification is frequently realized by means of verbs. Owing to this variable behaviour, the existence of adjectives as a distinct cross-linguistic category has been challenged. The present project aims (i) to advance our understanding of the neural representation and organisation of adjectives using neurolinguistic and computational approaches; and (ii) to determine whether PoS information is predictive of the way the brain processes linguistic stimuli. An MEG study across three typologically different languages (English, Basque and Mandarin Chinese) will elucidate the neurobiological correlates of the core PoS and investigate the contribution of PoS information to phrase-structure building.

Validating Silent Functional MRI

Magnetic resonance imaging is a notoriously noisy technique. This can compromise participant comfort and is problematic for all studies that require participants to listen to stimuli, e.g. investigations of speech or music. In this project, we evaluated a novel fMRI sequence, “Looping Star”, designed to capture brain activity while producing minimal acoustic noise. We demonstrated that it is a very promising technique, with a signal-to-noise ratio similar to that of acoustically noisy echo-planar imaging (EPI), the most commonly used functional MRI methodology.

Publications

Ritter, C. J., Husser, A. M., Jakab, A., Wiesinger, F., Solana, A. B., Fernandez, B., Swanborough, H., O’Gorman Tuura, R., & Hervais-Adelman, A. (2026). Listening Without the Noise: Near-Silent Looping Star fMRI Reveals Neural Processing of Degraded Speech. Hum Brain Mapp, 47(4), e70501. https://doi.org/10.1002/hbm.70501