Learning Co-Speech Gesture Representations in Dialogue through Contrastive Learning: An Intrinsic Evaluation

Esam Ghaleb, Bulat Khaertdinov, Wim Pouw, Marlou Rasenberg, Judith Holler, Asli Özyürek, Raquel Fernández

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review

Abstract

In face-to-face dialogues, the form-meaning relationship of co-speech gestures varies depending on contextual factors such as what the gestures refer to and the individual characteristics of speakers. These factors make co-speech gesture representation learning challenging. How can we learn meaningful gesture representations considering gestures’ variability and relationship with speech? This paper tackles this challenge by employing self-supervised contrastive learning techniques to learn gesture representations from skeletal and speech information. We propose an approach that includes both unimodal and multimodal pre-training to ground gesture representations in co-occurring speech. For training, we utilize a face-to-face dialogue dataset rich with representational iconic gestures. We conduct thorough intrinsic evaluations of the learned representations through comparison with human-annotated pairwise gesture similarity. Moreover, we perform a diagnostic probing analysis to assess the possibility of recovering interpretable gesture features from the learned representations. Our results show a significant positive correlation with human-annotated gesture similarity and reveal that the similarity between the learned representations is consistent with well-motivated patterns related to the dynamics of dialogue interaction. Moreover, our findings demonstrate that several features concerning the form of gestures can be recovered from the latent representations. Overall, this study shows that multimodal contrastive learning is a promising approach for learning gesture representations, which opens the door to using such representations in larger-scale gesture analysis studies.
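To make the kind of objective described in the abstract concrete, below is a minimal sketch (in Python/PyTorch, not the authors' released code) of a symmetric InfoNCE-style contrastive loss that pulls gesture embeddings computed from skeletal data toward embeddings of the co-occurring speech; the function name, tensor shapes, and the `temperature` value are illustrative assumptions rather than details taken from the paper.

```python
# A minimal, illustrative sketch (not the authors' implementation) of a
# symmetric InfoNCE-style contrastive objective for grounding gesture
# representations in co-occurring speech.
import torch
import torch.nn.functional as F

def gesture_speech_infonce(gesture_emb: torch.Tensor,
                           speech_emb: torch.Tensor,
                           temperature: float = 0.07) -> torch.Tensor:
    """Contrastive loss over a batch of (gesture, speech) pairs.

    gesture_emb, speech_emb: (batch, dim) projections from a skeleton
    encoder and a speech encoder; the pair at the same index co-occurs.
    """
    g = F.normalize(gesture_emb, dim=-1)
    s = F.normalize(speech_emb, dim=-1)
    logits = g @ s.t() / temperature  # (batch, batch) similarity matrix
    targets = torch.arange(g.size(0), device=g.device)
    # Co-occurring gesture-speech pairs lie on the diagonal; every other
    # combination in the batch acts as a negative example.
    loss_g2s = F.cross_entropy(logits, targets)
    loss_s2g = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_g2s + loss_s2g)
```

For an intrinsic evaluation of the sort the abstract reports, pairwise cosine similarities between the learned gesture embeddings could then be correlated (e.g. via a rank correlation) with human-annotated pairwise gesture similarity judgements.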
Original language: English
Title of host publication: ICMI 2024 - Proceedings of the 26th International Conference on Multimodal Interaction
Publisher: Association for Computing Machinery
Pages: 274-283
Number of pages: 10
ISBN (Electronic): 9798400704628
DOIs
Publication status: Published - 4 Nov 2024
Event: 26th International Conference on Multimodal Interaction 2024 - Crowne Plaza San José Conference Center, San Jose, Costa Rica
Duration: 4 Nov 2024 - 8 Nov 2024
https://icmi.acm.org/2024/

Publication series

Series: ACM International Conference Proceeding Series

Conference

Conference: 26th International Conference on Multimodal Interaction 2024
Abbreviated title: ICMI 2024
Country/Territory: Costa Rica
City: San Jose
Period: 4/11/24 - 8/11/24
Internet address: https://icmi.acm.org/2024/

Keywords

  • diagnostic probing
  • face-to-face dialogue
  • Gesture analysis
  • intrinsic evaluation
  • representation learning
