Multimodal emotion recognition from expressive faces, body gestures and speech

George Caridakis, Ginevra Castellano, Loic Kessous, Amaryllis Raouzaiou, Lori Malatesta, Stelios Asteriadis, Kostas Karpouzis

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review

Abstract

In this paper we present a multimodal approach to the recognition of eight emotions that integrates information from facial expressions, body movement and gestures, and speech. We trained and tested a model with a Bayesian classifier, using a multimodal corpus with eight emotions and ten subjects. First, individual classifiers were trained for each modality. Then data were fused at the feature level and at the decision level. Fusing the multimodal data markedly increased the recognition rates in comparison with the unimodal systems: the multimodal approach gave an improvement of more than 10% over the most successful unimodal system. Furthermore, fusion performed at the feature level yielded better results than fusion performed at the decision level.

Keywords: affective body language, affective speech, emotion recognition, multimodal fusion
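For readers unfamiliar with the two fusion strategies the abstract compares, the sketch below contrasts feature-level fusion (concatenating per-modality feature vectors and training one classifier) with decision-level fusion (training one classifier per modality and combining their class posteriors). It uses scikit-learn's Gaussian naive Bayes on synthetic placeholder data; the feature dimensions, corpus, classifier configuration, and posterior-averaging rule shown here are illustrative assumptions, not the authors' actual setup.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)
    n_samples, n_classes = 240, 8                 # eight emotion classes, as in the paper
    X_face  = rng.normal(size=(n_samples, 10))    # placeholder facial-expression features
    X_body  = rng.normal(size=(n_samples, 6))     # placeholder body/gesture features
    X_voice = rng.normal(size=(n_samples, 12))    # placeholder speech features
    y = np.tile(np.arange(n_classes), n_samples // n_classes)  # dummy emotion labels

    # Feature-level fusion: concatenate all modality features, train one classifier.
    feature_fused = GaussianNB().fit(np.hstack([X_face, X_body, X_voice]), y)

    # Decision-level fusion: train one classifier per modality, then combine their
    # class posteriors (here by simple averaging) and take the most probable class.
    unimodal = [GaussianNB().fit(X, y) for X in (X_face, X_body, X_voice)]
    avg_posterior = np.mean(
        [clf.predict_proba(X) for clf, X in zip(unimodal, (X_face, X_body, X_voice))],
        axis=0)
    decision_fused_labels = avg_posterior.argmax(axis=1)

Feature-level fusion lets a single model exploit correlations across modalities, while decision-level fusion keeps the modalities independent until their posteriors are combined; the abstract reports that the former performed better in this study.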
Original language: English
Title of host publication: Artificial Intelligence and Innovations 2007: from Theory to Applications
Subtitle of host publication: Proceedings of the 4th IFIP International Conference on Artificial Intelligence Applications and Innovations (AIAI 2007)
Publisher: Springer
Pages: 375-388
Number of pages: 14
ISBN (Electronic): 9780387741611
ISBN (Print): 9780387741604
Publication status: Published - 2007
Externally published: Yes

Publication series

Series: IFIP Advances in Information and Communication Technology
Volume: 247
ISSN: 1868-4238
