Abstract
Training a deep convolutional neural network (CNN) to succeed in visual object classification usually requires a large number of examples. Here, starting from such a pre-trained CNN, we study the task of extending the network to classify additional categories on the basis of only a few examples ("few-shot learning"). We find that a simple and fast prototype-based learning procedure in the global feature layers ("Global Prototype Learning", GPL) leads to remarkably good classification results for a large portion of the new classes. It requires only up to ten examples per new class to reach a plateau in performance. To understand this few-shot learning performance resulting from GPL, as well as the performance of the original network, we use the t-SNE method (Maaten and Hinton, 2008) to visualize clusters of object category examples. This reveals the strong connection between classification performance and data distribution, and explains why some new categories need only a few examples for learning while others resist good classification results even when trained with many more examples. (C) 2017 Elsevier Ltd. All rights reserved.
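The prototype-based procedure described above can be sketched roughly as follows: each new class gets a prototype computed as the mean of the feature vectors of its few examples, and a query is assigned to the class of the nearest prototype. This is a minimal illustrative sketch assuming pre-extracted global feature vectors; the function names (`fit_prototypes`, `predict`) and the Euclidean distance metric are assumptions, not the paper's exact GPL formulation.

```python
import numpy as np

def fit_prototypes(features, labels):
    """Compute one prototype per class as the mean of its feature vectors.

    features: (n_examples, n_dims) array of pre-extracted CNN features.
    labels:   (n_examples,) array of class labels.
    Returns a dict mapping class label -> prototype vector.
    """
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(prototypes, x):
    """Assign x to the class whose prototype is nearest (Euclidean distance)."""
    classes = list(prototypes)
    dists = [np.linalg.norm(x - prototypes[c]) for c in classes]
    return classes[int(np.argmin(dists))]

# Tiny synthetic example: two well-separated classes, two examples each.
feats = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
labs = np.array([0, 0, 1, 1])
protos = fit_prototypes(feats, labs)
```

Because fitting reduces to a per-class mean, adding a new class needs no gradient updates, which is what makes this kind of extension fast compared with fine-tuning the whole network.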
| Original language | English |
|---|---|
| Pages (from-to) | 159-172 |
| Number of pages | 14 |
| Journal | Neural Networks |
| Volume | 94 |
| DOIs | |
| Publication status | Published - 1 Oct 2017 |
| Externally published | Yes |
Keywords
- Convolutional Neural Networks
- Object Recognition
- Deep Learning
- Few-Shot Learning
- Transfer Learning