This work focuses on user modeling through affective analysis, which could in turn be used in intelligent personalized interfaces and systems, dynamic profiling, and context-aware multimedia applications. The analysis comprises statistical processing and classification of automatically extracted gestural and head pose expressivity features. Qualitative expressive cues of body and head motion are formulated computationally; the resulting features are processed statistically, their correlations are studied, and finally an emotion recognition attempt based on these features is presented. Significant emotion-specific patterns and interrelations among the expressivity features are derived, while the emotion recognition results indicate that gestural and head pose expressivity features could supplement and enhance a multimodal affective analysis system, providing an additional modality to be fused with other commonly used modalities such as facial expressions, prosodic and lexical acoustic features, and physiological measurements. © 2010 IEEE.
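As a rough illustration of the statistical pipeline the abstract describes (feature correlation study followed by emotion classification), the sketch below computes Pearson correlations between expressivity feature vectors and classifies a sample with a nearest-centroid rule. All feature values, labels, and function names here are hypothetical placeholders, not the paper's actual features, data, or method:

```python
# Minimal sketch, assuming generic per-gesture expressivity features
# (e.g. overall activation, spatial extent, fluidity); values are invented.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length feature series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def nearest_centroid(train, labels, sample):
    """Classify `sample` by Euclidean distance to per-emotion mean feature vectors."""
    groups = {}
    for vec, lab in zip(train, labels):
        groups.setdefault(lab, []).append(vec)
    best, best_d = None, float("inf")
    for lab, vecs in groups.items():
        centroid = [sum(col) / len(vecs) for col in zip(*vecs)]
        d = sqrt(sum((a - b) ** 2 for a, b in zip(sample, centroid)))
        if d < best_d:
            best, best_d = lab, d
    return best

# Illustrative use: two correlated feature series, then a toy classification.
activation = [0.2, 0.5, 0.9, 0.4]
power = [0.3, 0.6, 1.0, 0.5]
r = pearson(activation, power)

train = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]]
labels = ["calm", "calm", "angry", "angry"]
predicted = nearest_centroid(train, labels, [0.85, 0.8])
```

A nearest-centroid rule stands in here for whatever classifier the paper actually evaluates; the point is only to show how correlation analysis and classification operate on the same extracted feature vectors.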
| Title of host publication | Proceedings - 2010 5th International Workshop on Semantic Media Adaptation and Personalization, SMAP 2010 |
| Number of pages | 6 |
| Publication status | Published - 2010 |
| Series | Proceedings - 2010 5th International Workshop on Semantic Media Adaptation and Personalization, SMAP 2010 |