Few-shot learning in deep networks through global prototyping

Sebastian Blaes, Thomas Burwick*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Training a deep convolutional neural network (CNN) to succeed in visual object classification usually requires a great number of examples. Here, starting from such a pre-trained CNN, we study the task of extending the network to classify additional categories on the basis of only a few examples ("few-shot learning"). We find that a simple and fast prototype-based learning procedure in the global feature layers ("Global Prototype Learning", GPL) leads to remarkably good classification results for a large portion of the new classes. It requires only up to ten examples per new class to reach a plateau in performance. To understand the few-shot learning performance resulting from GPL, as well as the performance of the original network, we use the t-SNE method (Maaten and Hinton, 2008) to visualize clusters of object category examples. This reveals the strong connection between classification performance and data distribution and explains why some new categories need only a few examples for learning while others resist good classification results even when trained with many more examples.
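The abstract does not spell out the GPL procedure, but the general idea of prototype-based classification on fixed features can be illustrated with a minimal sketch: average the feature vectors of the few available examples per new class into one prototype each, then assign queries to the nearest prototype. The code below is an illustrative approximation, not the paper's exact method; it uses synthetic vectors standing in for activations of a pre-trained CNN's global feature layers, and all names and parameters are hypothetical.

```python
import numpy as np

def build_prototypes(features, labels):
    """Average the feature vectors of each class into a single prototype."""
    classes = np.unique(labels)
    protos = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(features, classes, protos):
    """Assign each feature vector to the class of its nearest prototype (Euclidean)."""
    dists = np.linalg.norm(features[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy demo with synthetic "global feature layer" activations.
rng = np.random.default_rng(0)
dim, shots = 64, 10                      # up to ten examples per new class, as in the abstract
means = rng.normal(size=(3, dim))        # three hypothetical new categories
support_x = np.concatenate([m + 0.3 * rng.normal(size=(shots, dim)) for m in means])
support_y = np.repeat(np.arange(3), shots)
query_x = np.concatenate([m + 0.3 * rng.normal(size=(20, dim)) for m in means])
query_y = np.repeat(np.arange(3), 20)

classes, protos = build_prototypes(support_x, support_y)
pred = classify(query_x, classes, protos)
print("accuracy:", (pred == query_y).mean())
```

For cluster visualizations of the kind described in the abstract, the same feature vectors could be passed to an off-the-shelf t-SNE implementation (e.g. sklearn.manifold.TSNE) to project them to two dimensions for plotting.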
Original language: English
Pages (from-to): 159-172
Number of pages: 14
Journal: Neural Networks
Volume: 94
DOIs
Publication status: Published - 1 Oct 2017
Externally published: Yes

Keywords

  • Convolutional Neural Networks
  • Object Recognition
  • Deep Learning
  • Few-Shot Learning
  • Transfer Learning
