Abstract
In face-to-face dialogues, the form-meaning relationship of co-speech gestures varies depending on contextual factors such as what the gestures refer to and the individual characteristics of speakers. These factors make co-speech gesture representation learning challenging. How can we learn meaningful gesture representations that account for gestures' variability and their relationship with speech? This paper tackles this challenge by employing self-supervised contrastive learning techniques to learn gesture representations from skeletal and speech information. We propose an approach that includes both unimodal and multimodal pre-training to ground gesture representations in co-occurring speech. For training, we utilize a face-to-face dialogue dataset rich in representational iconic gestures. We conduct thorough intrinsic evaluations of the learned representations by comparing them with human-annotated pairwise gesture similarity. Moreover, we perform a diagnostic probing analysis to assess the possibility of recovering interpretable gesture features from the learned representations. Our results show a significant positive correlation between the similarity of the learned representations and human-annotated gesture similarity, and reveal that the similarity between the learned representations is consistent with well-motivated patterns related to the dynamics of dialogue interaction. Furthermore, our findings demonstrate that several features concerning the form of gestures can be recovered from the latent representations. Overall, this study shows that multimodal contrastive learning is a promising approach for learning gesture representations, which opens the door to using such representations in larger-scale gesture analysis studies.
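As a rough illustration of the kind of approach the abstract describes, the sketch below implements a symmetric InfoNCE-style contrastive objective that pulls a gesture clip's skeletal embedding towards the embedding of its co-occurring speech. All module names, dimensions, and hyperparameters here are hypothetical assumptions for illustration; the paper's actual encoders, objective, and training setup are not specified in this record.

```python
# Minimal sketch of multimodal contrastive pre-training (symmetric InfoNCE),
# assuming hypothetical projection heads and pre-extracted per-clip features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GestureSpeechContrast(nn.Module):
    """Aligns skeletal gesture embeddings with co-occurring speech embeddings."""

    def __init__(self, gesture_dim: int, speech_dim: int,
                 proj_dim: int = 128, temperature: float = 0.07):
        super().__init__()
        # Hypothetical projection heads mapping each modality into a shared space.
        self.gesture_proj = nn.Linear(gesture_dim, proj_dim)
        self.speech_proj = nn.Linear(speech_dim, proj_dim)
        self.temperature = temperature

    def forward(self, gesture_feats: torch.Tensor,
                speech_feats: torch.Tensor) -> torch.Tensor:
        # L2-normalise so dot products are cosine similarities.
        g = F.normalize(self.gesture_proj(gesture_feats), dim=-1)
        s = F.normalize(self.speech_proj(speech_feats), dim=-1)
        logits = g @ s.t() / self.temperature               # (batch, batch) similarities
        targets = torch.arange(g.size(0), device=g.device)  # positives on the diagonal
        # Symmetric loss: gesture-to-speech and speech-to-gesture retrieval.
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

# Example: a batch of 32 gesture clips paired with their co-occurring speech.
loss_fn = GestureSpeechContrast(gesture_dim=256, speech_dim=768)
loss = loss_fn(torch.randn(32, 256), torch.randn(32, 768))
```

The diagnostic probing analysis mentioned in the abstract can be sketched in a similarly generic way: a simple linear classifier is trained on frozen representations to predict an annotated gesture-form feature (the feature shown here is invented for illustration).

```python
# Minimal probing sketch, continuing from the imports in the previous block.
probe = nn.Linear(128, 2)                  # linear probe over the shared space
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
reps = torch.randn(32, 128)                # frozen, pre-computed gesture representations
labels = torch.randint(0, 2, (32,))        # hypothetical binary form-feature labels
probe_loss = F.cross_entropy(probe(reps.detach()), labels)  # only the probe learns
probe_loss.backward()
optimizer.step()
```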
| Original language | English |
| --- | --- |
| Title of host publication | ICMI 2024 - Proceedings of the 26th International Conference on Multimodal Interaction |
| Publisher | Association for Computing Machinery |
| Pages | 274-283 |
| Number of pages | 10 |
| ISBN (Electronic) | 9798400704628 |
| DOIs | |
| Publication status | Published - 4 Nov 2024 |
| Event | 26th International Conference on Multimodal Interaction 2024, Crowne Plaza San José Conference Center, San Jose, Costa Rica. Duration: 4 Nov 2024 → 8 Nov 2024. https://icmi.acm.org/2024/ |
Publication series
| Series | ACM International Conference Proceeding Series |
| --- | --- |
Conference
| Conference | 26th International Conference on Multimodal Interaction 2024 |
| --- | --- |
| Abbreviated title | ICMI 2024 |
| Country/Territory | Costa Rica |
| City | San Jose |
| Period | 4/11/24 → 8/11/24 |
| Internet address | https://icmi.acm.org/2024/ |
Keywords
- diagnostic probing
- face-to-face dialogue
- gesture analysis
- intrinsic evaluation
- representation learning