Multisensory integration in speech processing: neural mechanisms of cross-modal aftereffects

Nick Kilian-Hütten, Elia Formisano, J. Vroomen*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic



Traditionally, perceptual neuroscience has focused on unimodal information processing. This is true also for investigations of speech processing, where the auditory modality was the natural focus of interest. Given the complexity of neuronal processing, this was a logical step while the field was still in its infancy. However, it is clear that this restriction does not do justice to the way we perceive the world around us in everyday interactions. Sensory information is rarely confined to one modality. Instead, we are constantly confronted with a stream of input to several or all senses, and already in infancy we match facial movements with their corresponding sounds (Campbell et al. 2001; Kuhl and Meltzoff 1982). Moreover, the information that is processed by our individual senses does not stay separated. Rather, the different channels interact and influence each other, affecting perceptual interpretations and constructions (Calvert 2001). Consequently, in the last 15–20 years, the perspective in cognitive science and perceptual neuroscience has shifted to include investigations of such multimodal integrative phenomena. Facilitatory cross-modal effects have consistently been demonstrated behaviorally (Shimojo and Shams 2001). When multisensory input is congruent (e.g., semantically and/or temporally), it typically lowers detection thresholds (Frassinetti et al. 2002), shortens reaction times (Forster et al. 2002; Schröger and Widmann 1998), and decreases saccadic eye movement latencies (Hughes et al. 1994) as compared to unimodal exposure. When incongruent input is (artificially) added in a second modality, this usually has the opposite consequences (Sekuler et al. 1997).
Original language: English
Title of host publication: Neural mechanisms of language
Editors: Maria Mody
Place of publication: Boston, MA
ISBN (Electronic): 9781493973255
ISBN (Print): 9781493973231
Publication status: Published - 2017

Publication series: Innovations in cognitive neuroscience

