Abstract
Timbre, or sound quality, is a crucial but poorly understood dimension of auditory perception, important for describing speech, music, and environmental sounds. The present study investigates the cortical representation of different timbral dimensions. Encoding models have typically incorporated the physical characteristics of sounds as features when attempting to understand their neural representation with functional MRI. Here we test an encoding model based on five subjectively derived dimensions of timbre to predict cortical responses to natural orchestral sounds. Results show that this timbre model can outperform other models based on spectral characteristics, and can perform as well as a complex joint spectrotemporal modulation model. In cortical regions at the medial border of Heschl's gyrus, bilaterally, and in regions immediately posterior to it in the right hemisphere, the timbre model outperforms even the joint spectrotemporal modulation model. These findings suggest that the responses of neuronal populations in auditory cortex may reflect the encoding of perceptual timbre dimensions.
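To make the encoding-model comparison concrete, the sketch below illustrates one common way such voxel-wise models are fit and evaluated: cross-validated ridge regression from a stimulus feature matrix (a low-dimensional timbre model versus a richer spectrotemporal feature set) to fMRI responses, scored by the correlation between predicted and measured responses per voxel. This is a generic illustration with simulated data, not the authors' actual pipeline; the feature counts, regularization grid, and cross-validation scheme are all assumptions.

```python
# Minimal sketch of a voxel-wise encoding-model comparison (simulated data).
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n_sounds, n_voxels = 42, 500                       # hypothetical stimulus and voxel counts
X_timbre = rng.standard_normal((n_sounds, 5))      # 5 perceptual timbre dimensions
X_specmod = rng.standard_normal((n_sounds, 128))   # richer spectrotemporal modulation features
Y = rng.standard_normal((n_sounds, n_voxels))      # simulated voxel responses

def encoding_accuracy(X, Y, n_splits=6):
    """Cross-validated prediction accuracy (prediction-response correlation) per voxel."""
    preds = np.zeros_like(Y)
    for train, test in KFold(n_splits=n_splits).split(X):
        # Ridge regression with an (assumed) log-spaced regularization grid
        model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X[train], Y[train])
        preds[test] = model.predict(X[test])
    return np.array([np.corrcoef(preds[:, v], Y[:, v])[0, 1] for v in range(Y.shape[1])])

r_timbre = encoding_accuracy(X_timbre, Y)
r_specmod = encoding_accuracy(X_specmod, Y)
# Fraction of voxels where the 5-D timbre model predicts better than the richer model
print("timbre > spectrotemporal in", 100 * np.mean(r_timbre > r_specmod), "% of voxels")
```

In this kind of analysis, the model whose cross-validated predictions correlate more strongly with the measured responses in a given region is taken as the better account of that region's encoding.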
Original language | English
---|---
Pages (from-to) | 60-70
Number of pages | 11
Journal | NeuroImage
Volume | 166
DOIs |
Publication status | Published - 1 Feb 2018
Keywords
- Auditory cortex
- Encoding models
- Music
- Perception
- Timbre
- HUMAN BRAIN ACTIVITY
- SOUNDS
- PITCH
- REPRESENTATION
- SENSITIVITY
- TONOTOPY
- IMAGES
- TONES
- Auditory Perception/physiology
- Humans
- Male
- Young Adult
- Magnetic Resonance Imaging
- Auditory Cortex/diagnostic imaging
- Adult
- Female
- Functional Neuroimaging/methods