Abstract
The present work deals with the incorporation of non-manual cues into automatic sign language recognition. More specifically, eye gaze, head pose, and facial expressions are discussed in relation to their grammatical and syntactic functions, and means of including them in the recognition phase are investigated. Computer vision issues related to extracting facial-feature, eye-gaze, and head-pose cues are presented, and classification approaches for incorporating these non-manual cues into the overall sign language recognition architecture are introduced.
| Original language | English |
| --- | --- |
| Pages (from-to) | 37-46 |
| Number of pages | 10 |
| Journal | Personal and Ubiquitous Computing |
| Volume | 18 |
| Issue number | 1 |
| Early online date | 26 Oct 2012 |
| DOIs | |
| Publication status | Published - 2014 |
| Externally published | Yes |
Keywords
- Automatic sign language recognition
- Eye gaze
- Facial expressions
- Head pose