A network analysis of audiovisual affective speech perception

H. Jansma, A. Roebroeck, T.F. Münte

Research output: Contribution to journal › Article › Academic › peer-review

5 Citations (Scopus)

Abstract

In this study we were interested in the neural system supporting the audiovisual integration of emotional facial expression and emotional prosody. To this end, normal participants were exposed to short videos of a computer-animated face voicing emotionally positive or negative words with the appropriate prosody, while the facial expression was either neutral or emotionally appropriate. To reveal the neural network involved in affective audiovisual (AV) integration, standard univariate analysis of the fMRI data was followed by Random-Effects Granger Causality Mapping (RFX-GCM). The regions that distinguished emotional from neutral facial expressions in the univariate analysis were taken as seed regions. Compared to neutral trials, trials showing emotional expressions yielded activation primarily in the bilateral amygdala, fusiform gyrus, middle temporal gyrus / superior temporal sulcus and inferior occipital gyrus. With either the left or the right amygdala as a seed region, RFX-GCM revealed connectivity with the right fusiform gyrus, the direction of influence indicating that the fusiform gyrus sends information to the amygdala. These results led to a working model for face perception in general, and for audiovisual affective integration in particular, that is an elaborated adaptation of existing models.
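For readers unfamiliar with Granger causality mapping, the sketch below illustrates, under simplifying assumptions, the core computation behind a seed-based analysis: for each voxel, an autoregressive model of the voxel time course with and without the seed's past is compared, and the difference of the two directed influences indicates which region tends to lead the other. This is only a minimal NumPy illustration, not the RFX-GCM implementation used in the paper (which operates on preprocessed fMRI time series and adds a random-effects group stage); all function names and the toy data below are hypothetical.

```python
import numpy as np

def ar_residual_variance(target, predictors, order=1):
    """Least-squares AR fit of `target` on the lagged `predictors`;
    returns the residual variance of the fit."""
    T = len(target)
    X = [np.ones(T - order)]                      # intercept
    for p in predictors:
        for lag in range(1, order + 1):
            X.append(p[order - lag:T - lag])      # lagged predictor column
    X = np.column_stack(X)
    y = target[order:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

def granger_influence(x, y, order=1):
    """Directed influence x -> y: log-ratio of the residual variance of y
    predicted from its own past vs. from its own past plus the past of x."""
    restricted = ar_residual_variance(y, [y], order)
    full = ar_residual_variance(y, [y, x], order)
    return np.log(restricted / full)

def gc_difference_map(seed, voxel_timecourses, order=1):
    """Per-voxel difference of directed influences (seed -> voxel minus
    voxel -> seed); positive values suggest the seed drives the voxel,
    negative values that the voxel drives the seed."""
    return np.array([granger_influence(seed, v, order) - granger_influence(v, seed, order)
                     for v in voxel_timecourses])

# Hypothetical single-subject example with random data standing in for
# an amygdala seed time course and a handful of voxel time courses.
rng = np.random.default_rng(0)
seed = rng.standard_normal(200)
voxels = rng.standard_normal((5, 200))
print(gc_difference_map(seed, voxels))
```

At the group level, such per-subject influence-difference maps would be entered into a random-effects test across participants, which is the sense in which the mapping described in the abstract is "random effects".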
Original language: English
Pages (from-to): 230-241
Number of pages: 12
Journal: Neuroscience
Volume: 256
DOI: 10.1016/j.neuroscience.2013.10.047
Publication status: Published - 3 Jan 2014

Keywords

  • audiovisual speech
  • emotion
  • facial affect perception
  • amygdala
  • Granger causality
  • SUPERIOR TEMPORAL SULCUS
  • EVENT-RELATED FMRI
  • TIME-RESOLVED FMRI
  • FUSIFORM FACE AREA
  • FACIAL EXPRESSIONS
  • HUMAN BRAIN
  • NEURAL RESPONSES
  • HUMAN AMYGDALA
  • EFFECTIVE CONNECTIVITY
  • CROSSMODAL BINDING

Cite this

Jansma, H., Roebroeck, A., & Münte, T. F. (2014). A network analysis of audiovisual affective speech perception. Neuroscience, 256, 230-241. https://doi.org/10.1016/j.neuroscience.2013.10.047