Spectro-Temporal Processing in a Two-Stream Computational Model of Auditory Cortex

Isma Zulfiqar*, Michelle Moerel, Elia Formisano

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Neural processing of sounds in the dorsal and ventral streams of the (human) auditory cortex is optimized for analyzing fine-grained temporal and spectral information, respectively. Here we use a Wilson and Cowan firing-rate modeling framework to simulate spectro-temporal processing of sounds in these auditory streams and to investigate the link between neural population activity and behavioral results of psychoacoustic experiments. The proposed model consisted of two core (A1 and R, representing primary areas) and two belt (Slow and Fast, representing rostral and caudal processing, respectively) areas, differing in their spectral and temporal response properties. First, we simulated the responses to amplitude modulated (AM) noise and tones. In agreement with electrophysiological results, we observed an area-dependent transition from a temporal (synchronization) code to a rate code when moving from low to high modulation rates. Simulated neural responses in an amplitude modulation detection task suggested that thresholds derived from population responses in core areas closely resembled those of psychoacoustic experiments in human listeners. For tones, simulated modulation threshold functions were found to depend on the carrier frequency. Second, we simulated the responses to complex tones with missing fundamental stimuli and found that synchronization of responses in the Fast area accurately encoded pitch, with the strength of synchronization depending on the number and order of harmonic components. Finally, using speech stimuli, we showed that the spectral and temporal structure of speech was reflected in parallel by the modeled areas. The analyses highlighted that the Slow stream encoded, with high spectral precision, those aspects of the speech signal characterized by slow temporal changes (e.g., prosody), while the Fast stream encoded primarily the faster changes (e.g., phonemes, consonants, temporal pitch).
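The modeling framework named above is the classic Wilson-Cowan firing-rate formalism, in which each cortical unit is a coupled excitatory-inhibitory population pair. A minimal sketch of one such unit is shown below, using textbook coupling constants from the original Wilson-Cowan literature rather than the parameters fitted for the A1, R, Slow, and Fast areas in this paper; the function names and the 8 Hz AM drive are illustrative assumptions only.

```python
import numpy as np

def sigmoid(x, a=1.0, theta=4.0):
    """Wilson-Cowan sigmoid response function, shifted so that S(0) = 0."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta))) - 1.0 / (1.0 + np.exp(a * theta))

def simulate_wc_unit(p_drive, t_max=1.0, dt=1e-3,
                     tau_e=0.01, tau_i=0.02,
                     c1=16.0, c2=12.0, c3=15.0, c4=3.0):
    """Euler integration of one excitatory-inhibitory Wilson-Cowan pair.

    p_drive: external input to the excitatory population (a scalar or a
    callable of time). Coupling constants are illustrative values from the
    classic Wilson-Cowan papers, not the parameters fitted in this study.
    Returns the excitatory and inhibitory firing-rate traces.
    """
    n = int(t_max / dt)
    e = np.zeros(n)
    i = np.zeros(n)
    for k in range(n - 1):
        p = p_drive(k * dt) if callable(p_drive) else p_drive
        e[k + 1] = e[k] + dt / tau_e * (-e[k] + sigmoid(c1 * e[k] - c2 * i[k] + p))
        i[k + 1] = i[k] + dt / tau_i * (-i[k] + sigmoid(c3 * e[k] - c4 * i[k]))
    return e, i

# Example: drive the unit with a sinusoidally amplitude-modulated input at 8 Hz
e, i = simulate_wc_unit(lambda t: 2.0 + 1.5 * np.sin(2 * np.pi * 8 * t))
```

Area-specific spectral and temporal tuning, as in the paper, would then be obtained by choosing different time constants and connectivity for each simulated area.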
Interestingly, the pitch of a speaker was encoded both spatially (i.e., tonotopically) in the Slow area and temporally in the Fast area. Overall, the simulations showed that the model is valuable for generating hypotheses on how the different cortical areas/streams may contribute to behaviorally relevant aspects of auditory processing. The model can be used in combination with physiological models of neurovascular coupling to generate predictions for human functional MRI experiments.
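The temporal (synchronization) code mentioned in the abstract is conventionally quantified with a phase-locking index such as the Goldberg-Brown vector strength, which ranges from 0 (no locking to the modulation) to 1 (perfect locking). A minimal sketch of that standard measure is given below; the paper's exact synchronization metric may differ.

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Goldberg-Brown vector strength of spike times relative to a
    modulation frequency: 1 = perfect phase locking, 0 = no locking."""
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

# Spikes locked to an 8 Hz modulation (one spike per cycle, fixed phase)
locked = np.arange(0, 1, 1 / 8)
vs_locked = vector_strength(locked, 8.0)   # exactly 1.0

# Spikes at random times over the same 1 s window
rng = np.random.default_rng(0)
vs_random = vector_strength(rng.uniform(0, 1, 64), 8.0)  # near 0
```

Applied to simulated population responses, a drop in vector strength with increasing modulation rate marks the transition from the temporal code to the rate code described above.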

Original language: English
Article number: 95
Number of pages: 18
Journal: Frontiers in Computational Neuroscience
Volume: 13
DOIs
Publication status: Published - 22 Jan 2020

Keywords

  • auditory cortex
  • sound processing
  • dynamic neuronal modeling
  • temporal coding
  • rate coding
  • MODULATION TRANSFER-FUNCTIONS
  • RESPONSE PROPERTIES
  • MATHEMATICAL-THEORY
  • FREQUENCY
  • SPEECH
  • REPRESENTATION
  • ORGANIZATION
  • AMPLITUDE
  • SOUND
  • MECHANISMS
