The nested hierarchy of overt, mouthed, and imagined speech activity evident in intracranial recordings

P.Z. Soroush, C. Herff, S.K. Ries, J.J. Shih, T. Schultz, D.J. Krusienski*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Recent studies have demonstrated that it is possible to decode and synthesize various aspects of acoustic speech directly from intracranial measurements of electrophysiological brain activity. To continue progressing toward the development of a practical speech neuroprosthesis for individuals with speech impairments, better understanding and modeling of imagined speech processes are required. The present study uses intracranial brain recordings from participants who performed a speaking task with trials consisting of overt, mouthed, and imagined speech modes, representing varying degrees of decreasing behavioral output. Speech activity detection models are constructed using spatial, spectral, and temporal brain activity features, and the features and model performances are characterized and compared across the three degrees of behavioral output. The results indicate the existence of a hierarchy in which the relevant channels for the lower behavioral output modes form nested subsets of the relevant channels from the higher behavioral output modes. This provides important insights toward the elusive goal of developing more effective imagined speech decoding models relative to their better-established overt speech decoding counterparts.
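The speech activity detection described in the abstract can be illustrated with a minimal sketch. This is not the authors' pipeline; it is a hedged toy example in which simulated multichannel recordings are reduced to per-channel high-gamma log band power (the band edges, sampling rate, channel count, and simulated data are all illustrative assumptions) and a logistic regression classifier separates speech from non-speech windows.

```python
# Hedged sketch (not the published method): speech-vs-silence detection
# from per-channel high-gamma log band power on simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

FS = 1000          # sampling rate in Hz (assumed)
N_CH = 16          # number of channels (assumed)
WIN = 500          # window length in samples (0.5 s)
BAND = (70, 170)   # nominal "high-gamma" band edges in Hz (assumed)

rng = np.random.default_rng(0)

def band_power(window, fs=FS, band=BAND):
    """Log power within `band` for each channel of a (channels, samples) window."""
    freqs = np.fft.rfftfreq(window.shape[-1], d=1.0 / fs)
    spec = np.abs(np.fft.rfft(window, axis=-1)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.log(spec[..., mask].mean(axis=-1))

def simulate(n_trials, speech, active=range(6)):
    """Toy windows: 'speech' adds high-gamma activity on a subset of channels."""
    X = []
    for _ in range(n_trials):
        w = rng.standard_normal((N_CH, WIN))
        if speech:
            t = np.arange(WIN) / FS
            for ch in active:  # only some channels carry speech-related activity
                w[ch] += 2.0 * np.sin(2 * np.pi * 120 * t + rng.uniform(0, 2 * np.pi))
        X.append(band_power(w))
    return np.array(X)

X = np.vstack([simulate(200, speech=True), simulate(200, speech=False)])
y = np.array([1] * 200 + [0] * 200)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
acc = clf.score(Xte, yte)
print(f"speech-vs-silence accuracy: {acc:.2f}")
```

In this toy setting, inspecting the magnitudes of `clf.coef_` indicates which channels drive detection; comparing such channel-relevance profiles across overt, mouthed, and imagined conditions is analogous in spirit to the nested-subset comparison reported in the abstract.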
Original language: English
Article number: 119913
Number of pages: 15
Journal: NeuroImage
Volume: 269
Issue number: 1
DOIs
Publication status: Published - 1 Apr 2023

Keywords

  • Stereotactic electroencephalography (sEEG)
  • Brain-computer interface (BCI)
  • Speech decoding
  • Imagined speech
  • Speech activity detection
  • MOTOR
