Multimodal Fusion based on Information Gain for Emotion Recognition in the Wild

E. Ghaleb*, M. Popa, E. Hortal, S. Asteriadis

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review


Abstract

In this paper, we present a novel approach to multimodal emotion recognition on the challenging AFEW'16 dataset, composed of video clips labeled with the six basic emotions plus the neutral state. After a preprocessing stage, we employ several feature extraction techniques (CNN-based, DSIFT on the face and facial ROIs, geometric, and audio-based) and encode the frame-based features using Fisher vector representations. Next, we leverage the properties of each modality through different fusion schemes. In addition to early fusion and decision-level fusion, we propose a hierarchical decision-level method based on information gain principles and optimize its parameters using genetic algorithms. The experimental results confirm the suitability of our method: we obtain 53.06% validation accuracy, surpassing the 38.81% baseline by more than 14 percentage points on a dataset well suited to emotion recognition in the wild.
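To make the fusion idea concrete, the sketch below illustrates one plausible reading of information-gain-weighted decision-level fusion refined by a genetic algorithm. It is not the paper's exact method: it assumes per-modality classifiers already output class-probability vectors, uses mutual information between each modality's hard predictions and the ground-truth labels as the "information gain" weight, fuses by a weighted sum of probabilities, and runs a minimal genetic algorithm with blend crossover and Gaussian mutation. All names (`info_gain_weights`, `ga_refine`, etc.) and the synthetic validation data are illustrative.

```python
# Illustrative sketch (not the authors' implementation) of
# information-gain-weighted decision-level fusion with a small
# genetic algorithm refining the modality weights.
import numpy as np
from sklearn.metrics import mutual_info_score

RNG = np.random.default_rng(0)
N_CLASSES = 7        # six basic emotions + neutral, as in AFEW'16
N_SAMPLES = 300      # synthetic "validation clips"
N_MODALITIES = 3     # e.g. CNN-face, DSIFT-face, audio (assumed)

# Synthetic stand-ins for real per-modality class probabilities.
y_true = RNG.integers(0, N_CLASSES, N_SAMPLES)
probs = []           # one (N_SAMPLES, N_CLASSES) matrix per modality
for _ in range(N_MODALITIES):
    noisy = RNG.random((N_SAMPLES, N_CLASSES))
    noisy[np.arange(N_SAMPLES), y_true] += 2 * RNG.random(N_SAMPLES)
    probs.append(noisy / noisy.sum(axis=1, keepdims=True))

def info_gain_weights(probs, y_true):
    """Weight each modality by the mutual information between its hard
    predictions and the true labels (one reading of 'information gain'),
    normalised to sum to one."""
    gains = np.array([mutual_info_score(y_true, p.argmax(axis=1))
                      for p in probs])
    return gains / gains.sum()

def fuse(probs, w):
    """Weighted sum of per-modality probabilities, then argmax."""
    return sum(wi * p for wi, p in zip(w, probs)).argmax(axis=1)

def accuracy(w):
    return np.mean(fuse(probs, w) == y_true)

def ga_refine(seed_w, pop_size=30, gens=40, sigma=0.1):
    """Tiny genetic algorithm: seed the population around the
    info-gain weights, then iterate selection, crossover, mutation."""
    pop = np.abs(seed_w + RNG.normal(0, sigma, (pop_size, len(seed_w))))
    pop /= pop.sum(axis=1, keepdims=True)
    for _ in range(gens):
        fit = np.array([accuracy(w) for w in pop])
        parents = pop[np.argsort(fit)[-pop_size // 2:]]        # selection
        idx_a = RNG.integers(0, len(parents), pop_size)
        idx_b = RNG.integers(0, len(parents), pop_size)
        alpha = RNG.random((pop_size, 1))
        children = alpha * parents[idx_a] + (1 - alpha) * parents[idx_b]  # crossover
        children = np.abs(children + RNG.normal(0, sigma, children.shape))  # mutation
        pop = children / children.sum(axis=1, keepdims=True)
    fit = np.array([accuracy(w) for w in pop])
    return pop[fit.argmax()]

w0 = info_gain_weights(probs, y_true)
w_opt = ga_refine(w0)
print("info-gain weights:", np.round(w0, 3), "acc:", round(accuracy(w0), 3))
print("GA-refined weights:", np.round(w_opt, 3), "acc:", round(accuracy(w_opt), 3))
```

In this toy setup the genetic algorithm only nudges the information-gain weights; on real validation data, weights would be fitted on held-out predictions to avoid overfitting the fusion stage.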
Original language: English
Title of host publication: Proceedings of the 2017 Intelligent Systems Conference (IntelliSys)
Publisher: IEEE
Pages: 814-823
Number of pages: 10
ISBN (Print): 9781509064359
DOIs
Publication status: Published - 2017
Event: Intelligent Systems Conference (IntelliSys) - London, United Kingdom
Duration: 7 Sept 2017 - 8 Sept 2017
https://saiconference.com/Conferences/IntelliSys2017

Conference

Conference: Intelligent Systems Conference (IntelliSys)
Abbreviated title: IntelliSys 2017
Country/Territory: United Kingdom
City: London
Period: 7/09/17 - 8/09/17
Internet address

Keywords

  • Emotion recognition
  • multimodal fusion
  • information gain
  • genetic algorithm
  • expression
