Emotional state dependence facilitates automatic imitation of visual speech

Jasmine Virhia, Sonja Kotz, Patti Adank

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Observing someone speak automatically triggers the cognitive and neural mechanisms required to produce speech, a phenomenon known as automatic imitation. Automatic imitation of speech can be measured with the Stimulus Response Compatibility (SRC) paradigm, which shows facilitated response times (RTs) when responding to a prompt (e.g., say aa) in the presence of a congruent distracter (a video of someone saying aa), compared with responding in the presence of an incongruent distracter (a video of someone saying oo). Current models of the relation between emotion and cognitive control suggest that automatic imitation can be modulated by varying stimulus-driven task aspects, that is, the distracter's emotional valence. It is unclear, however, how the emotional state of the observer affects automatic imitation. The current study explored independent effects of the distracter's emotional valence (Stimulus-driven Dependence) and the observer's emotional state (State Dependence) on automatic imitation of speech. Participants completed an SRC paradigm for visual speech stimuli, producing a prompt superimposed over a neutral or emotional (happy or angry) distracter video. State Dependence was manipulated by asking participants to speak the prompt in a neutral or emotional (happy or angry) voice. Automatic imitation was facilitated for emotional prompts, but not for emotional distracters, implying a facilitating effect of State Dependence. The results are interpreted in the context of theories of automatic imitation and cognitive control, and we suggest that models of automatic imitation should be modified to accommodate state-dependent and stimulus-driven effects.
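The congruency effect described above (faster responses with congruent than incongruent distracters) is typically quantified as a difference in mean RTs. A minimal sketch, with hypothetical condition labels and RT values (not the authors' analysis code or data):

```python
# Illustrative sketch: computing an automatic imitation (congruency)
# effect from response times in an SRC task. All trial data below are
# hypothetical, for demonstration only.

from statistics import mean

# Hypothetical trial records: (distracter_congruency, rt_ms)
trials = [
    ("congruent", 512), ("congruent", 498), ("congruent", 530),
    ("incongruent", 571), ("incongruent", 559), ("incongruent", 588),
]

def congruency_effect(trials):
    """Mean incongruent RT minus mean congruent RT, in ms.

    A positive value indicates that congruent distracters facilitated
    responding, i.e., an automatic imitation effect.
    """
    cong = [rt for cond, rt in trials if cond == "congruent"]
    incong = [rt for cond, rt in trials if cond == "incongruent"]
    return mean(incong) - mean(cong)

effect = congruency_effect(trials)
print(f"Automatic imitation effect: {effect:.1f} ms")  # → 59.3 ms
```

In the study's full design, this effect would be computed separately for each combination of distracter valence (Stimulus-driven Dependence) and prompt valence (State Dependence) and compared across cells.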

Original language: English
Pages (from-to): 2833-2847
Number of pages: 15
Journal: Quarterly Journal of Experimental Psychology
Volume: 72
Issue number: 12
Early online date: 22 Jul 2019
DOIs: 10.1177/1747021819867856
Publication status: Published - Dec 2019

Keywords

  • BEHAVIOR
  • COGNITIVE CONTROL
  • COMPATIBILITY
  • DISTORTION
  • EXCITABILITY
  • FACIAL MIMICRY
  • Imitation
  • MODULATION
  • RECOGNITION
  • REPRESENTATIONS
  • RESPONSES
  • control
  • emotion
  • speech production

Cite this

@article{caaf8304da934ddf829d66b1c355c861,
title = "Emotional state dependence facilitates automatic imitation of visual speech",
abstract = "Observing someone speak automatically triggers cognitive and neural mechanisms required to produce speech, a phenomenon known as automatic imitation. Automatic imitation of speech can be measured using the Stimulus Response Compatibility (SRC) paradigm that shows facilitated response times (RTs) when responding to a prompt (e.g., say aa) in the presence of a congruent distracter (a video of someone saying aa), compared with responding in the presence of an incongruent distracter (a video of someone saying oo). Current models of the relation between emotion and cognitive control suggest that automatic imitation can be modulated by varying the stimulus-driven task aspects, that is, the distracter's emotional valence. It is unclear how the emotional state of the observer affects automatic imitation. The current study explored independent effects of emotional valence of the distracter (Stimulus-driven Dependence) and the observer's emotional state (State Dependence) on automatic imitation of speech. Participants completed an SRC paradigm for visual speech stimuli. They produced a prompt superimposed over a neutral or emotional (happy or angry) distracter video. State Dependence was manipulated by asking participants to speak the prompt in a neutral or emotional (happy or angry) voice. Automatic imitation was facilitated for emotional prompts, but not for emotional distracters, thus implying a facilitating effect of State Dependence. The results are interpreted in the context of theories of automatic imitation and cognitive control, and we suggest that models of automatic imitation are to be modified to accommodate for state-dependent and stimulus-driven dependent effects.",
keywords = "BEHAVIOR, COGNITIVE CONTROL, COMPATIBILITY, DISTORTION, EXCITABILITY, FACIAL MIMICRY, Imitation, MODULATION, RECOGNITION, REPRESENTATIONS, RESPONSES, control, emotion, speech production",
author = "Jasmine Virhia and Sonja Kotz and Patti Adank",
year = "2019",
month = "12",
doi = "10.1177/1747021819867856",
language = "English",
volume = "72",
pages = "2833--2847",
journal = "Quarterly Journal of Experimental Psychology",
issn = "1747-0218",
publisher = "Psychology Press Ltd",
number = "12",

}

Emotional state dependence facilitates automatic imitation of visual speech. / Virhia, Jasmine; Kotz, Sonja; Adank, Patti.

In: Quarterly Journal of Experimental Psychology, Vol. 72, No. 12, 12.2019, p. 2833-2847.

Research output: Contribution to journal › Article › Academic › peer-review

TY - JOUR

T1 - Emotional state dependence facilitates automatic imitation of visual speech

AU - Virhia, Jasmine

AU - Kotz, Sonja

AU - Adank, Patti

PY - 2019/12

Y1 - 2019/12

N2 - Observing someone speak automatically triggers cognitive and neural mechanisms required to produce speech, a phenomenon known as automatic imitation. Automatic imitation of speech can be measured using the Stimulus Response Compatibility (SRC) paradigm that shows facilitated response times (RTs) when responding to a prompt (e.g., say aa) in the presence of a congruent distracter (a video of someone saying aa), compared with responding in the presence of an incongruent distracter (a video of someone saying oo). Current models of the relation between emotion and cognitive control suggest that automatic imitation can be modulated by varying the stimulus-driven task aspects, that is, the distracter's emotional valence. It is unclear how the emotional state of the observer affects automatic imitation. The current study explored independent effects of emotional valence of the distracter (Stimulus-driven Dependence) and the observer's emotional state (State Dependence) on automatic imitation of speech. Participants completed an SRC paradigm for visual speech stimuli. They produced a prompt superimposed over a neutral or emotional (happy or angry) distracter video. State Dependence was manipulated by asking participants to speak the prompt in a neutral or emotional (happy or angry) voice. Automatic imitation was facilitated for emotional prompts, but not for emotional distracters, thus implying a facilitating effect of State Dependence. The results are interpreted in the context of theories of automatic imitation and cognitive control, and we suggest that models of automatic imitation are to be modified to accommodate for state-dependent and stimulus-driven dependent effects.

AB - Observing someone speak automatically triggers cognitive and neural mechanisms required to produce speech, a phenomenon known as automatic imitation. Automatic imitation of speech can be measured using the Stimulus Response Compatibility (SRC) paradigm that shows facilitated response times (RTs) when responding to a prompt (e.g., say aa) in the presence of a congruent distracter (a video of someone saying aa), compared with responding in the presence of an incongruent distracter (a video of someone saying oo). Current models of the relation between emotion and cognitive control suggest that automatic imitation can be modulated by varying the stimulus-driven task aspects, that is, the distracter's emotional valence. It is unclear how the emotional state of the observer affects automatic imitation. The current study explored independent effects of emotional valence of the distracter (Stimulus-driven Dependence) and the observer's emotional state (State Dependence) on automatic imitation of speech. Participants completed an SRC paradigm for visual speech stimuli. They produced a prompt superimposed over a neutral or emotional (happy or angry) distracter video. State Dependence was manipulated by asking participants to speak the prompt in a neutral or emotional (happy or angry) voice. Automatic imitation was facilitated for emotional prompts, but not for emotional distracters, thus implying a facilitating effect of State Dependence. The results are interpreted in the context of theories of automatic imitation and cognitive control, and we suggest that models of automatic imitation are to be modified to accommodate for state-dependent and stimulus-driven dependent effects.

KW - BEHAVIOR

KW - COGNITIVE CONTROL

KW - COMPATIBILITY

KW - DISTORTION

KW - EXCITABILITY

KW - FACIAL MIMICRY

KW - Imitation

KW - MODULATION

KW - RECOGNITION

KW - REPRESENTATIONS

KW - RESPONSES

KW - control

KW - emotion

KW - speech production

U2 - 10.1177/1747021819867856

DO - 10.1177/1747021819867856

M3 - Article

C2 - 31331238

VL - 72

SP - 2833

EP - 2847

JO - Quarterly Journal of Experimental Psychology

JF - Quarterly Journal of Experimental Psychology

SN - 1747-0218

IS - 12

ER -