Modeling invariant object processing based on tight integration of simulated and empirical data in a Common Brain Space

Research output: Contribution to journal › Article › Academic › peer-review

3 Citations (Scopus)

Abstract

Recent advances in Computer Vision and Experimental Neuroscience have provided insights into the mechanisms underlying invariant object recognition. However, because of the different research aims in the two fields, models have tended to evolve independently. A tighter integration between computational and empirical work may contribute to the cross-fertilized development of (neurobiologically plausible) computational models and computationally defined empirical theories, which can be incrementally merged into a comprehensive brain model. After reviewing theoretical and empirical work on invariant object perception, this article proposes a novel framework in which neural network activity and measured neuroimaging data are interfaced in a common representational space. This enables direct quantitative comparisons between predicted and observed activity patterns within and across multiple stages of object processing, which may help to clarify how higher-order invariant representations are created from low-level features. Given the advent of columnar-level imaging with high-resolution fMRI, it is time to capitalize on this new window into the brain and test which predictions of the various object recognition models are supported by this novel empirical evidence.
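The abstract does not specify how the common representational space is constructed, but one standard way to compare model activity and neuroimaging data in such a shared space is representational similarity analysis: both data types are reduced to stimulus-by-stimulus dissimilarity matrices, which can then be correlated directly. The sketch below is an illustration of that general idea, not the article's specific method; all data and dimensions are hypothetical.

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between activity patterns (one row per stimulus/object)."""
    return 1.0 - np.corrcoef(patterns)

def compare_spaces(model_patterns, fmri_patterns):
    """Correlate the upper triangles of the model and fMRI RDMs,
    yielding one similarity score between the two representational spaces."""
    m, f = rdm(model_patterns), rdm(fmri_patterns)
    iu = np.triu_indices_from(m, k=1)  # off-diagonal entries only
    return np.corrcoef(m[iu], f[iu])[0, 1]

# Hypothetical example: 8 objects, 50 model units, 200 voxels.
rng = np.random.default_rng(0)
model = rng.standard_normal((8, 50))
# Simulated fMRI patterns as a random linear readout of the model.
voxels = model @ rng.standard_normal((50, 200))
score = compare_spaces(model, voxels)
print(f"RDM correlation: {score:.2f}")
```

Because dissimilarity matrices abstract away from the measurement units (model activations vs. BOLD signal), this kind of comparison works even when the two spaces have different dimensionalities, which is the practical appeal of a common representational space.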
Original language: English
Article number: 12
Pages (from-to): 1-7
Number of pages: 7
Journal: Frontiers in Computational Neuroscience
Volume: 6
DOIs
Publication status: Published - 9 Mar 2012

Keywords

  • object perception
  • view-invariant object recognition
  • neuroimaging
  • large-scale neuromodeling
  • (high-field) fMRI
  • multimodal data integration
  • INFERIOR TEMPORAL CORTEX
  • VISUAL-CORTEX
  • TOP-DOWN
  • ORIENTATION-DEPENDENCE
  • 3-DIMENSIONAL OBJECTS
  • SELECTIVE ATTENTION
  • HIERARCHICAL-MODELS
  • OCCIPITAL CORTEX
  • FMRI DATA
  • RECOGNITION
