Bridging face and sound modalities through Domain Adaptation Metric Learning

Christos Athanasiadis*, Enrique Hortal, Stelios Asteriadis

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review


Abstract

Robust emotion recognition systems require extensive training, employing huge numbers of training samples to build sophisticated models. Moreover, research focuses mostly on facial expression recognition, mainly due to the wide availability of related datasets. Such rich, publicly available datasets do not exist for other modalities, such as sound. In this work, a heterogeneous domain adaptation framework is introduced to bridge two inherently different domains (namely, face and audio). The goal is to perform affect recognition on the modality for which only a small amount of data is available, leveraging large amounts of data from the other modality.
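To illustrate the general idea of bridging modalities with different feature dimensionalities, the sketch below projects face and audio features into a shared embedding space, where a simple metric compares cross-modal samples. This is a minimal illustration, not the authors' method: all dimensions, the random (untrained) projection matrices, and the function names are assumptions made for the example; in the paper, such mappings would be learned via domain adaptation metric learning.

```python
# Illustrative sketch of a shared embedding space for two heterogeneous
# modalities (face and audio). All dimensions and the random projections
# are hypothetical; real projections would be learned, e.g. by pulling
# same-emotion cross-modal pairs together under a metric-learning loss.
import numpy as np

rng = np.random.default_rng(0)

FACE_DIM, AUDIO_DIM, SHARED_DIM = 512, 128, 32  # assumed feature sizes

# Modality-specific linear projections into the common space
W_face = rng.normal(size=(FACE_DIM, SHARED_DIM)) / np.sqrt(FACE_DIM)
W_audio = rng.normal(size=(AUDIO_DIM, SHARED_DIM)) / np.sqrt(AUDIO_DIM)

def embed_face(x):
    """Map a face feature vector into the shared embedding space."""
    return x @ W_face

def embed_audio(x):
    """Map an audio feature vector into the shared embedding space."""
    return x @ W_audio

def cross_modal_distance(face_feat, audio_feat):
    """Euclidean distance between embeddings of the two modalities."""
    return float(np.linalg.norm(embed_face(face_feat) - embed_audio(audio_feat)))

# Toy usage: compare one face sample against one audio sample
face = rng.normal(size=FACE_DIM)
audio = rng.normal(size=AUDIO_DIM)
d = cross_modal_distance(face, audio)
print(f"cross-modal distance: {d:.3f}")
```

Once both modalities live in the same space, standard metric-based classifiers (e.g. nearest neighbour over labelled face embeddings) can be applied to the data-scarce audio modality, which is the motivation the abstract describes.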

Original language: English
Title of host publication: ESANN 2019 proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. Bruges (Belgium), 24-26 April 2019
Pages: 385-390
ISBN (Electronic): 978-287-587-065-0
Publication status: Published - 2019
Event: European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning - Bruges, Belgium
Duration: 24 Apr 2019 - 26 Apr 2019
Conference number: 27

Symposium

Symposium: European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning
Abbreviated title: ESANN 2019
Country/Territory: Belgium
City: Bruges
Period: 24/04/19 - 26/04/19
