Running Large-Scale Simulations on the Neurorobotics Platform to Understand Vision: The Case of Visual Crowding

Alban Bornet*, Jacques Kaiser, Alexander Kroner, Egidio Falotico, Alessandro Ambrosano, Kepa Cantero, Michael H. Herzog, Gregory Francis

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Traditionally, human vision research has focused on specific paradigms and proposed models to explain very specific properties of visual perception. However, the complexity and scope of modern psychophysical paradigms undermine the success of this approach. For example, perception of an element deteriorates strongly when neighboring elements are presented alongside it (visual crowding). As shown recently, the magnitude of deterioration depends not only on the immediately neighboring elements but on almost all elements in the visual field and their specific configuration. Hence, to fully explain human visual perception, one needs to take large parts of the visual field into account and combine all the aspects of vision that become relevant at such a scale. These efforts require sophisticated and collaborative modeling. The Neurorobotics Platform (NRP) of the Human Brain Project offers a unique opportunity to connect models of all sorts of visual functions, even those developed by different research groups, into a coherently functioning system. Here, we describe how we used the NRP to connect and simulate a segmentation model, a retina model, and a saliency model to explain complex results about visual perception. The combination of models highlights the versatility of the NRP and provides novel explanations for the inward-outward anisotropy in visual crowding.
Original language: English
Article number: 33
Number of pages: 14
Journal: Frontiers in Neurorobotics
Volume: 13
DOIs
Publication status: Published - 29 May 2019

Keywords

  • visual crowding
  • neurorobotics
  • modeling
  • large-scale simulation
  • vision
  • attention
  • model
