Reconstructing the spectrotemporal modulations of real-life sounds from fMRI response patterns

Roberta Santoro, Michelle Moerel, Federico De Martino, Giancarlo Valente, Kamil Ugurbil, Essa Yacoub, Elia Formisano*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

Ethological views of brain functioning suggest that sound representations and computations in the auditory neural system are finely optimized to process and discriminate behaviorally relevant acoustic features and sounds (e.g., spectrotemporal modulations in the songs of zebra finches). Here, we show that modeling neural sound representations in terms of frequency-specific spectrotemporal modulations enables accurate and specific reconstruction of real-life sounds from high-resolution functional magnetic resonance imaging (fMRI) response patterns in the human auditory cortex. Region-based analyses indicated that response patterns in separate portions of the auditory cortex are informative of distinct sets of spectrotemporal modulations. Most relevantly, results revealed that in early auditory regions, and progressively more in surrounding regions, temporal modulations in a range relevant for speech analysis (∼2-4 Hz) were reconstructed more faithfully than other temporal modulations. In early auditory regions, this effect was frequency-dependent and only present for lower frequencies (
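The abstract describes a model-based decoding approach: sounds are represented by frequency-specific spectrotemporal modulation features, and these features are reconstructed from fMRI response patterns and then used to identify or reconstruct the original sounds. The sketch below is a minimal, illustrative version of this idea and is not the authors' actual pipeline; the data shapes, the synthetic data, the use of ridge regression, and the correlation-based identification step are all assumptions made for the example.

```python
# Illustrative sketch (not the authors' code): linear decoding of
# spectrotemporal modulation features from fMRI response patterns,
# followed by correlation-based identification of held-out sounds.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Hypothetical sizes, for illustration only.
n_sounds, n_voxels, n_features = 60, 500, 128

# Y: frequency-specific spectrotemporal modulation energies per sound
#    (in practice these would come from a modulation filter bank applied
#    to each sound's spectrogram); here they are random placeholders.
Y = rng.standard_normal((n_sounds, n_features))

# X: fMRI response pattern per sound; simulated here as a noisy linear
#    mixture of the modulation features.
W_true = rng.standard_normal((n_features, n_voxels))
X = Y @ W_true + 0.5 * rng.standard_normal((n_sounds, n_voxels))

pred = np.zeros_like(Y)
for train, test in KFold(n_splits=6).split(X):
    # Learn a linear map from voxel patterns to modulation features on the
    # training sounds, then predict the features of the held-out sounds.
    model = Ridge(alpha=1.0).fit(X[train], Y[train])
    pred[test] = model.predict(X[test])

# Identification: a held-out sound counts as identified if its predicted
# feature vector correlates most strongly with its own actual feature vector.
corr = np.corrcoef(pred, Y)[:n_sounds, n_sounds:]
accuracy = np.mean(corr.argmax(axis=1) == np.arange(n_sounds))
print(f"identification accuracy: {accuracy:.2f}")
```

With real data, Y would be computed from the stimuli and X measured in the scanner; the cross-validated prediction and identification steps would remain structurally the same.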
Original language: English
Pages (from-to): 4799-4804
Journal: Proceedings of the National Academy of Sciences of the United States of America
Volume: 114
Issue number: 18
DOIs
Publication status: Published - 2 May 2017

Keywords

  • auditory cortex
  • functional MRI
  • natural sounds
  • model-based decoding
  • spectrotemporal modulations
