Abstract
Recent studies have demonstrated that acoustic speech can be decoded and synthesized directly from intracranial measurements of brain activity. A major current challenge is to extend this decoding to imagined speech, toward the development of a practical speech neuroprosthesis for people with disabilities. The present study used intracranial brain recordings from participants who performed a speaking task consisting of overt, mouthed, and imagined speech trials. Rather than directly comparing the performance of speech decoding models trained on the respective speaking modes, this study developed and trained models that use neural data to discriminate between pairs of speaking modes, in order to better elucidate the unique neural features underlying the performance discrepancies between overt and imagined speech decoding. The results further support that, while a common neural substrate exists across speaking modes, unique neural processes also differentiate them.
Original language | English |
---|---|
Title of host publication | 11th International IEEE/EMBS Conference on Neural Engineering, NER 2023 - Proceedings |
Publisher | IEEE Computer Society |
Number of pages | 4 |
Volume | 2023-April |
Edition | 1 |
ISBN (Print) | 9781665462921 |
Publication status | Published - 1 Jan 2023 |
Event | 11th International IEEE/EMBS Conference on Neural Engineering, Baltimore, United States. Duration: 25 Apr 2023 → 27 Apr 2023. Conference number: 11. https://2023.ieee-ner.org/ |
Publication series
Series | International IEEE/EMBS Conference on Neural Engineering, NER |
---|---|
Volume | 2023-April |
ISSN | 1948-3546 |
Conference
Conference | 11th International IEEE/EMBS Conference on Neural Engineering |
---|---|
Abbreviated title | NER 2023 |
Country/Territory | United States |
City | Baltimore |
Period | 25/04/23 → 27/04/23 |