Face analysis and body language understanding from egocentric cameras (short paper)

Ronja Möller, Antonino Furnari, Sebastiano Battiato, Aki Härmä, Giovanni Maria Farinella

Research output: Contribution to journal › Conference article in journal › Academic › peer-review

Abstract

The objective of the project described in this position paper is to develop and evaluate algorithms that enable a mobile agent, e.g., a robot, to observe a user during their day-to-day activities and infer relevant information that could help improve human-machine interaction. To achieve this goal, we will first explore intelligent navigation strategies. The overall focus will be on visual data, analysing the user's actions, face and body language. Once the algorithms run on the robot, they can be used to log user activity and emotional states and to support users during daily activities. The information collected about the users will be useful for further analysis by healthcare professionals or assistive applications. In addition to the mentioned domains, attention will also be paid to speech analysis and synthesis to ensure natural interaction with the user. The algorithms will be able to infer the age, gender, emotions, activity and body language of the user. Lastly, information obtained by First Person Vision systems worn by a user will be considered as an external source of data to make more accurate inferences and to explore possible correlations.

Original language: English
Pages (from-to): 84-91
Number of pages: 8
Journal: CEUR Workshop Proceedings
Volume: 2596
Publication status: Published - 2020
Externally published: Yes
Event: 1st Workshop on Smart Personal Health Interfaces - Cagliari, Italy
Duration: 17 Mar 2020 – 17 Mar 2020
Conference number: 1

Keywords

  • Computer Vision
  • Human Robot Interaction
  • Machine Learning
