Contextual encoder-decoder network for visual saliency prediction

Alexander Kroner*, Mario Senden, Kurt Driessens, Rainer Goebel

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Predicting salient regions in natural images requires the detection of objects that are present in a scene. To develop robust representations for this challenging task, high-level visual features at multiple spatial scales must be extracted and augmented with contextual information. However, existing models aimed at explaining human fixation maps do not incorporate such a mechanism explicitly. Here we propose an approach based on a convolutional neural network pre-trained on a large-scale image classification task. The architecture forms an encoder-decoder structure and includes a module with multiple convolutional layers at different dilation rates to capture multi-scale features in parallel. Moreover, we combine the resulting representations with global scene information to accurately predict visual saliency. Our model achieves competitive and consistent results across multiple evaluation metrics on two public saliency benchmarks, and we demonstrate the effectiveness of the suggested approach on five datasets and selected examples. Compared to state-of-the-art approaches, the network is based on a lightweight image classification backbone and hence presents a suitable choice for applications with limited computational resources, such as (virtual) robotic systems, to estimate human fixations across complex natural scenes. Our TensorFlow implementation is openly available at https://github.com/alexanderkroner/saliency.
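The multi-scale module described in the abstract is in the spirit of atrous spatial pyramid pooling: several convolutions with different dilation rates applied in parallel, fused with a pooled global scene descriptor. The sketch below illustrates this idea in TensorFlow/Keras; the function name, dilation rates, and filter counts are illustrative assumptions rather than the published configuration, for which the linked repository is the authoritative reference.

```python
import tensorflow as tf
from tensorflow.keras import layers


def multi_scale_context_module(features, filters=256, rates=(4, 8, 12)):
    """ASPP-style block: parallel dilated convolutions plus global context.

    `features` is an encoder feature map of shape (batch, H, W, C).
    All names and hyperparameters here are illustrative assumptions.
    """
    # A 1x1 branch preserves fine local detail.
    branches = [layers.Conv2D(filters, 1, padding="same",
                              activation="relu")(features)]

    # Parallel 3x3 convolutions at increasing dilation rates enlarge the
    # receptive field without reducing spatial resolution.
    for rate in rates:
        branches.append(layers.Conv2D(filters, 3, padding="same",
                                      dilation_rate=rate,
                                      activation="relu")(features))

    # Global average pooling yields a scene-level summary, which is
    # projected and broadcast back onto the spatial grid.
    pooled = layers.GlobalAveragePooling2D(keepdims=True)(features)
    pooled = layers.Conv2D(filters, 1, activation="relu")(pooled)
    pooled = tf.image.resize(pooled, tf.shape(features)[1:3])
    branches.append(pooled)

    # Concatenate all branches and fuse them with a final 1x1 convolution.
    merged = layers.Concatenate()(branches)
    return layers.Conv2D(filters, 1, padding="same",
                         activation="relu")(merged)


# Example: apply the module to hypothetical encoder features.
inputs = tf.keras.Input(shape=(30, 40, 512))
outputs = multi_scale_context_module(inputs)
model = tf.keras.Model(inputs, outputs)
```

In the published model, a lightweight classification backbone produces the encoder features and a decoder upsamples the fused representation into the final fixation density map; the sketch covers only the contextual module between the two.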

Original language: English
Pages (from-to): 261-270
Number of pages: 10
Journal: Neural Networks
Volume: 129
Early online date: 8 May 2020
Publication status: Published - Sept 2020

Keywords

  • Attention
  • Computer vision
  • Convolutional neural networks
  • Deep learning
  • Human fixations
  • Information
  • Integration
  • Saliency prediction
