Comparing active vision models

G. C. H. E. de Croon*, I. G. Sprinkhuizen-Kuyper, E. O. Postma

*Corresponding author for this work

    Research output: Contribution to journal › Article › Academic › peer-review

    Abstract

    Active vision models can simplify visual tasks, provided that they can select sensible actions given incoming sensory inputs. Many active vision models have been proposed, but a comparative evaluation of these models is lacking. We present a comparison of active vision models from two different approaches. The "probabilistic approach" is an approach in which state estimation is the central goal. The "behavioural approach" is an approach that does not divide the vision process into a state-estimation phase and an acting phase. We identify different types of models within the probabilistic approach, and introduce a model inspired by the behavioural approach. We describe these types of models in a common framework and evaluate their performance on a task of viewpoint selection for the classification of three-dimensional objects. The experimental results reveal how the performances of the active vision models relate to each other. For example, the behavioural model performs as well as the best model from the probabilistic approach. Overall, the experimental results reveal relations between the usefulness of active vision, the number of objects involved in the classification task, and the richness of the models' visual observations. We conclude that research on active vision should aim at reaching a deeper understanding of these relations by applying active vision models to more complex and real-world tasks. (C) 2008 Elsevier B.V. All rights reserved.
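    The probabilistic approach described in the abstract makes state estimation central: the model maintains a belief over the possible object classes and selects the next viewpoint so as to reduce its uncertainty. A minimal sketch of that idea follows; all quantities (the toy likelihood table, the number of classes and viewpoints, and the greedy expected-entropy criterion) are illustrative assumptions, not the specific models compared in the paper.

```python
import numpy as np

# Hypothetical toy setup: 3 object classes, 4 viewpoints, 2 possible
# discrete observations per viewpoint. P[o, v, z] = P(z | object o, view v).
# The numbers are random placeholders, not data from the paper.
rng = np.random.default_rng(0)
n_objects, n_views, n_obs = 3, 4, 2
P = rng.dirichlet(np.ones(n_obs), size=(n_objects, n_views))

def update_belief(belief, view, obs):
    """Bayesian update of the class belief after seeing `obs` at `view`."""
    posterior = belief * P[:, view, obs]
    return posterior / posterior.sum()

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def select_view(belief):
    """Greedy viewpoint selection: minimise expected posterior entropy."""
    best_view, best_h = 0, np.inf
    for v in range(n_views):
        h = 0.0
        for z in range(n_obs):
            p_z = (belief * P[:, v, z]).sum()  # predictive prob. of z at v
            if p_z > 0:
                h += p_z * entropy(update_belief(belief, v, z))
        if h < best_h:
            best_view, best_h = v, h
    return best_view

# Simulated episode: start from a uniform prior and take three glances.
belief = np.full(n_objects, 1.0 / n_objects)
true_object = 1
for _ in range(3):
    v = select_view(belief)
    z = rng.choice(n_obs, p=P[true_object, v])  # simulated observation
    belief = update_belief(belief, v, z)
```

    A behavioural model, by contrast, would map observations to viewpoint actions directly, without maintaining the explicit belief that `select_view` relies on.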

    Original language: English
    Pages (from-to): 374-384
    Number of pages: 11
    Journal: Image and Vision Computing
    Volume: 27
    Issue number: 4
    DOIs
    Publication status: Published - 3 Mar 2009

    Keywords

    • Active vision
    • Probabilistic approach
    • Behavioural approach
    • Object recognition
    • Appearance
    • Selection
