Modeling invariant object processing based on tight integration of simulated and empirical data in a Common Brain Space

Research output: Contribution to journal › Article › Academic › peer-review

3 Citations (Scopus)

Abstract

Recent advances in Computer Vision and Experimental Neuroscience have provided insights into the mechanisms underlying invariant object recognition. However, due to the different research aims in the two fields, models have tended to evolve independently. A tighter integration between computational and empirical work may contribute to the cross-fertilized development of (neurobiologically plausible) computational models and computationally defined empirical theories, which can be incrementally merged into a comprehensive brain model. After reviewing theoretical and empirical work on invariant object perception, this article proposes a novel framework in which neural network activity and measured neuroimaging data are interfaced in a common representational space. This enables direct quantitative comparisons between predicted and observed activity patterns within and across multiple stages of object processing, which may help to clarify how high-order invariant representations are created from low-level features. Given the advent of columnar-level imaging with high-resolution fMRI, it is time to capitalize on this new window into the brain and test which predictions of the various object recognition models are supported by this novel empirical evidence.
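One way such a predicted-versus-observed comparison could look in practice is a representational-similarity-style analysis: activity patterns from a model layer and measured voxel responses to the same stimuli are each summarized as pairwise dissimilarity matrices, which can then be correlated directly. The Python sketch below is a minimal, hypothetical illustration of that idea using placeholder data; it is not the authors' Common Brain Space implementation, and the array sizes, distance metric, and RSA-style comparison are assumptions made purely for illustration.

# Illustrative sketch: comparing simulated and measured activity patterns
# via representational dissimilarity matrices (one possible quantitative
# comparison; NOT the paper's actual Common Brain Space implementation).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_stimuli = 20          # e.g., object images shown in the experiment
n_model_units = 100     # hypothetical units in one model layer
n_voxels = 250          # hypothetical voxels in one cortical region

# Placeholder data; in practice these would be model-layer activations
# and fMRI response patterns for the same stimulus set.
model_patterns = rng.standard_normal((n_stimuli, n_model_units))
fmri_patterns = rng.standard_normal((n_stimuli, n_voxels))

# Representational dissimilarity matrices (condensed form): pairwise
# correlation distances between stimulus-evoked patterns.
model_rdm = pdist(model_patterns, metric="correlation")
fmri_rdm = pdist(fmri_patterns, metric="correlation")

# Rank-correlate the two RDMs: higher values indicate that the model
# layer and the brain region represent the stimuli more similarly.
rho, p = spearmanr(model_rdm, fmri_rdm)
print(f"Model-brain RDM correlation: rho = {rho:.3f} (p = {p:.3f})")

Repeating such a comparison layer by layer and region by region would be one way to quantify where along the object-processing hierarchy model predictions and imaging data agree.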
Original language: English
Article number: 12
Pages (from-to): 1-7
Number of pages: 7
Journal: Frontiers in Computational Neuroscience
Volume: 6
DOI: 10.3389/fncom.2012.00012
Publication status: Published - 9 Mar 2012

Keywords

  • object perception
  • view-invariant object recognition
  • neuroimaging
  • large-scale neuromodeling
  • (high-field) fMRI
  • multimodal data integration
  • INFERIOR TEMPORAL CORTEX
  • VISUAL-CORTEX
  • TOP-DOWN
  • ORIENTATION-DEPENDENCE
  • 3-DIMENSIONAL OBJECTS
  • SELECTIVE ATTENTION
  • HIERARCHICAL-MODELS
  • OCCIPITAL CORTEX
  • FMRI DATA
  • RECOGNITION

Cite this

@article{1f533123b92049cba3a88f54dbbd8d16,
title = "Modeling invariant object processing based on tight integration of simulated and empirical data in a Common Brain Space",
abstract = "Recent advances in Computer Vision and Experimental Neuroscience have provided insights into the mechanisms underlying invariant object recognition. However, due to the different research aims in the two fields, models have tended to evolve independently. A tighter integration between computational and empirical work may contribute to the cross-fertilized development of (neurobiologically plausible) computational models and computationally defined empirical theories, which can be incrementally merged into a comprehensive brain model. After reviewing theoretical and empirical work on invariant object perception, this article proposes a novel framework in which neural network activity and measured neuroimaging data are interfaced in a common representational space. This enables direct quantitative comparisons between predicted and observed activity patterns within and across multiple stages of object processing, which may help to clarify how high-order invariant representations are created from low-level features. Given the advent of columnar-level imaging with high-resolution fMRI, it is time to capitalize on this new window into the brain and test which predictions of the various object recognition models are supported by this novel empirical evidence.",
keywords = "object perception, view-invariant object recognition, neuroimaging, large-scale neuromodeling, (high-field) fMRI, multimodal data integration, INFERIOR TEMPORAL CORTEX, VISUAL-CORTEX, TOP-DOWN, ORIENTATION-DEPENDENCE, 3-DIMENSIONAL OBJECTS, SELECTIVE ATTENTION, HIERARCHICAL-MODELS, OCCIPITAL CORTEX, FMRI DATA, RECOGNITION",
author = "J.C. Peters and J. Reithler and R. Goebel",
year = "2012",
month = "3",
day = "9",
doi = "10.3389/fncom.2012.00012",
language = "English",
volume = "6",
pages = "1--7",
journal = "Frontiers in Computational Neuroscience",
issn = "1662-5188",
publisher = "Frontiers Media S.A.",

}
