Dynamic facial expressions prime the processing of emotional prosody

Patricia Garrido-Vásquez, Marc D Pell, Silke Paulmann, Sonja A Kotz

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Evidence suggests that emotion is represented supramodally in the human brain. Emotional facial expressions, which often precede vocally expressed emotion in real life, can modulate event-related potentials (N100 and P200) during emotional prosody processing. To investigate these cross-modal emotional interactions, two lines of research have been put forward: cross-modal integration and cross-modal priming. In cross-modal integration studies, visual and auditory channels are temporally aligned, whereas in priming studies they are presented consecutively. Here we used cross-modal emotional priming to study the interaction of dynamic visual and auditory emotional information. Specifically, we presented dynamic facial expressions (angry, happy, neutral) as primes and emotionally intoned pseudo-speech sentences (angry, happy) as targets. We were interested in how prime-target congruency would affect early auditory event-related potentials, i.e., the N100 and P200, in order to shed more light on how dynamic facial information is used in cross-modal emotional prediction. Results showed enhanced N100 amplitudes for incongruently primed compared to congruently and neutrally primed emotional prosody, while the latter two conditions did not differ significantly. However, N100 peak latency was significantly delayed in the neutral condition compared to the other two conditions. Source reconstruction revealed that the right parahippocampal gyrus was activated in incongruent compared to congruent trials in the N100 time window. No significant ERP effects were observed in the P200 range. Our results indicate that dynamic facial expressions influence vocal emotion processing at an early point in time, and that an emotional mismatch between a facial expression and its ensuing vocal emotional signal induces additional processing costs in the brain, potentially because the cross-modal emotional prediction mechanism is violated when prime and target are emotionally incongruent.
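The reported ERP pattern can be illustrated with a toy analysis. The sketch below simulates condition-averaged ERP traces and extracts the N100 peak amplitude and latency per priming condition, i.e., the most negative deflection in an assumed 80-140 ms search window. This is not the authors' analysis pipeline: the waveform shapes, amplitudes, latencies, and the search window are invented purely to mirror the reported pattern (larger N100 after incongruent primes, delayed N100 after neutral primes); a real analysis would run on recorded EEG data in dedicated software.

```python
import numpy as np

def n100_peak(erp, times, window=(0.08, 0.14)):
    """Return (amplitude, latency) of the most negative deflection
    in the given time window of a condition-averaged ERP trace."""
    idx = np.where((times >= window[0]) & (times <= window[1]))[0]
    peak = idx[np.argmin(erp[idx])]          # absolute sample index of the minimum
    return erp[peak], times[peak]

# Simulated 1 kHz ERP traces, 0-300 ms post stimulus onset.
times = np.arange(0.0, 0.3, 0.001)

def simulate(n100_amp, n100_lat):
    """Gaussian N100 deflection plus a fixed P200; no noise, for clarity."""
    n100 = n100_amp * np.exp(-((times - n100_lat) ** 2) / (2 * 0.012 ** 2))
    p200 = 3.0 * np.exp(-((times - 0.20) ** 2) / (2 * 0.020 ** 2))
    return n100 + p200

# Amplitudes (µV) and latencies (s) chosen to mimic the reported pattern:
# enhanced N100 for incongruent primes, delayed N100 after neutral primes.
conditions = {
    "congruent":   simulate(-4.0, 0.105),
    "incongruent": simulate(-6.5, 0.105),
    "neutral":     simulate(-4.0, 0.118),
}

for name, erp in conditions.items():
    amp, lat = n100_peak(erp, times)
    print(f"{name:11s} N100: {amp:5.2f} µV at {lat * 1000:.0f} ms")
```

Peak picking within a fixed search window, as above, is one common way to quantify N100 amplitude and latency; mean-amplitude measures over the same window are an equally standard alternative.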

Original language: English
Article number: 244
Number of pages: 11
Journal: Frontiers in Human Neuroscience
Volume: 12
DOI: 10.3389/fnhum.2018.00244
Publication status: Published - 12 Jun 2018

Keywords

  • Journal Article
  • priming
  • INFORMATION
  • REPRESENTATION
  • prosody
  • audiovisual
  • parahippocampal gyrus
  • TIME-COURSE
  • PERCEPTION
  • VOICE
  • emotion
  • EVENT-RELATED POTENTIALS
  • AUDIOVISUAL INTEGRATION
  • BRAIN POTENTIALS
  • event-related potentials
  • cross-modal prediction
  • SPEECH PROSODY
  • FACE
  • dynamic faces

Cite this

Garrido-Vásquez, Patricia; Pell, Marc D; Paulmann, Silke; Kotz, Sonja A. / Dynamic facial expressions prime the processing of emotional prosody. In: Frontiers in Human Neuroscience. 2018; Vol. 12.

@article{3425d76a26ec466ea80ac6c960778baa,
  title = "Dynamic facial expressions prime the processing of emotional prosody",
  author = "Patricia Garrido-V{\'a}squez and Pell, {Marc D} and Silke Paulmann and Kotz, {Sonja A}",
  year = "2018",
  month = "6",
  day = "12",
  doi = "10.3389/fnhum.2018.00244",
  language = "English",
  volume = "12",
  journal = "Frontiers in Human Neuroscience",
  issn = "1662-5161",
  publisher = "Frontiers Media S.A.",
}

PubMed ID: 29946247