Cortical processing of pitch: Model-based encoding and decoding of auditory fMRI responses to real-life sounds

Vittoria De Angelis, Federico De Martino, Michelle Moerel, Roberta Santoro, Lars Hausfeld, Elia Formisano*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

Pitch is a perceptual attribute related to the fundamental frequency (or periodicity) of a sound. So far, the cortical processing of pitch has been investigated mostly using synthetic sounds. However, the complex harmonic structure of natural sounds may require different mechanisms for the extraction and analysis of pitch. This study investigated the neural representation of pitch in the human auditory cortex using model-based encoding and decoding analyses of high-field (7 T) functional magnetic resonance imaging (fMRI) data collected while participants listened to a wide range of real-life sounds. Specifically, we modeled the fMRI responses as a function of the sounds' perceived pitch height and salience (related to the fundamental frequency and the harmonic structure, respectively), which we estimated with a computational pitch-extraction algorithm (de Cheveigné and Kawahara, 2002). First, using single-voxel fMRI encoding, we identified a pitch-coding region in the antero-lateral Heschl's gyrus (HG) and the adjacent superior temporal gyrus (STG). In these regions, the pitch representation model combining height and salience predicted the fMRI responses better than other models of acoustic processing and, in the right hemisphere, better than pitch representations based on height or salience alone. Second, using model-based decoding, we showed that multi-voxel response patterns in the identified regions are more informative about perceived pitch than those in the remainder of the auditory cortex. Further multivariate analyses showed that complementing a multi-resolution spectro-temporal sound representation with pitch yields a small but significant improvement in the decoding of complex sounds from fMRI response patterns.
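The two pitch descriptors used above can be illustrated with a simplified sketch of the de Cheveigné and Kawahara (2002) YIN algorithm. This is a hedged, minimal reconstruction, not the paper's actual implementation: only the squared-difference function, the cumulative-mean normalization, and the absolute-threshold lag search are kept, and the "salience" returned here is just a periodicity-strength proxy (1 minus the normalized difference at the chosen lag); the function name `yin_f0` and all parameter defaults are illustrative assumptions.

```python
import numpy as np

def yin_f0(x, sr, fmin=50.0, fmax=500.0, threshold=0.1):
    """Simplified YIN-style estimate of f0 (pitch height) and a
    periodicity-strength proxy for pitch salience.

    Hedged sketch of de Cheveigné & Kawahara (2002); the full
    algorithm adds windowed frames and parabolic interpolation.
    """
    tau_max = int(sr / fmin)
    lags = np.arange(1, tau_max + 1)
    # Steps 1-2: squared difference function over candidate lags.
    d = np.array([np.sum((x[:-tau] - x[tau:]) ** 2) for tau in lags])
    # Step 3: cumulative-mean normalization (d'(1) = 1 by construction).
    dprime = d * lags / np.cumsum(d)
    # Step 4: first lag below threshold, then descend to the local minimum.
    start = int(sr / fmax) - 1
    below = np.where(dprime[start:] < threshold)[0]
    i = start + int(below[0]) if below.size else start + int(np.argmin(dprime[start:]))
    while i + 1 < len(dprime) and dprime[i + 1] < dprime[i]:
        i += 1
    f0 = sr / lags[i]
    salience = 1.0 - dprime[i]  # close to 1 for strongly periodic sounds
    return f0, salience
```

On a pure tone the estimator recovers the tone's frequency and reports near-maximal salience, whereas noisy or inharmonic sounds yield a weaker salience value, which is the distinction the encoding models above exploit.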
In sum, this work extends model-based fMRI encoding and decoding methods, previously employed to examine the representation and processing of acoustic sound features in the human auditory system, to the representation and processing of a perceptually relevant attribute such as pitch. Taken together, the results of our model-based encoding and decoding analyses indicate that the pitch of complex real-life sounds is extracted and processed in lateral HG/STG regions, at locations consistent with those reported in several previous fMRI studies using synthetic sounds. Within these regions, pitch-related sound representations reflect the modulatory combination of the height and the salience of the pitch percept.
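The general shape of a voxel-wise model-based encoding analysis like the one summarized above can be sketched as follows. This is a generic illustration under stated assumptions, not the paper's pipeline: `fit_encoding_model` and `prediction_accuracy` are hypothetical names, the feature matrix stands in for per-sound pitch descriptors (e.g., height and salience columns), and plain ridge regression with held-out prediction correlation is used as the standard model-comparison score in this literature.

```python
import numpy as np

def fit_encoding_model(F_train, y_train, alpha=1.0):
    """Ridge-regression weights mapping a sound-feature matrix
    (n_sounds x n_features) to one voxel's fMRI responses.
    Generic sketch; feature columns could be pitch height/salience."""
    n_feat = F_train.shape[1]
    return np.linalg.solve(F_train.T @ F_train + alpha * np.eye(n_feat),
                           F_train.T @ y_train)

def prediction_accuracy(F_test, y_test, w):
    """Pearson correlation between predicted and measured held-out
    responses: the usual score for comparing encoding models."""
    y_hat = F_test @ w
    return np.corrcoef(y_hat, y_test)[0, 1]
```

Comparing models then amounts to fitting each candidate feature space (e.g., spectro-temporal only vs. spectro-temporal plus pitch) on training sounds and asking which one predicts the held-out voxel responses more accurately.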

Original language: English
Pages (from-to): 291-300
Number of pages: 10
Journal: NeuroImage
Volume: 180
Issue number: Part A
Early online date: 13 Nov 2017
DOIs
Publication status: Published - 15 Oct 2018

Keywords

  • Auditory cortex
  • Pitch processing
  • Real-life sounds
  • fMRI
  • Decoding
  • Encoding
  • Iterated rippled noise
  • Human brain activity
  • Fundamental frequency
  • Natural sounds
  • Complex sounds
  • Cortex
  • Representation
  • Perception
  • Organization
  • Sensitivity
  • Evoked Potentials, Auditory/physiology
  • Humans
  • Magnetic Resonance Imaging/methods
  • Male
  • Pitch Perception/physiology
  • Image Processing, Computer-Assisted/methods
  • Adult
  • Female
  • Acoustic Stimulation
  • Brain Mapping/methods
  • Auditory Cortex/physiology
  • Models, Neurological
