Abstract
Recent work has shown that aspects of continuously spoken speech can be decoded from electrocorticographic (ECoG) signals recorded on the cortical surface. The ultimate objective is a speech neuroprosthesis that synthesizes continuous speech directly from brain activity, seamlessly and in real time. Rather than decoding acoustic properties or discrete speech classes, such a neuroprosthesis might be realized by decoding the articulator movements involved in speech production, as recent work highlights a representation of articulator movement in ECoG signals. This work investigates the neural correlates of speech-related lip movements extracted from video recordings. We show how characteristics of lip movement can be decoded and how lip-landmark positions can be predicted from ECoG activity.
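To make the landmark-prediction idea concrete, the following is a minimal sketch of regressing 2-D lip-landmark coordinates from ECoG-derived features with closed-form ridge regression. The feature dimensionality, number of landmarks, and the synthetic data are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples = 500    # time windows (assumed)
n_features = 64    # e.g. high-gamma power per electrode (assumed)
n_landmarks = 20   # lip landmarks, each with an (x, y) coordinate (assumed)

# Synthetic stand-ins for ECoG features X and video-derived landmark
# positions Y; a real experiment would align both streams in time.
X = rng.standard_normal((n_samples, n_features))
W_true = rng.standard_normal((n_features, n_landmarks * 2))
Y = X @ W_true + 0.1 * rng.standard_normal((n_samples, n_landmarks * 2))

# Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Evaluate with the per-coordinate correlation between predicted and
# actual landmark trajectories, a common decoding metric.
Y_hat = X @ W
corr = np.array([np.corrcoef(Y[:, i], Y_hat[:, i])[0, 1]
                 for i in range(Y.shape[1])])
print(f"mean correlation: {corr.mean():.3f}")
```

In practice the video-side landmarks would come from a facial-landmark tracker applied to the recordings, and the regression would be cross-validated rather than fit on all data as in this sketch.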
Original language | English |
---|---|
Title of host publication | 2019 9th International IEEE/EMBS Conference on Neural Engineering (NER) |
Publisher | IEEE Xplore |
Pages | 522-525 |
Number of pages | 4 |
DOIs | |
Publication status | Published - 1 Jan 2019 |