Deep neural-kernel blocks

Siamak Mehrkanoon*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This paper introduces novel deep architectures that use the hybrid neural-kernel core model as their first building block. The proposed models combine a neural-network-based architecture with a kernel-based model enriched with pooling layers. In particular, three kernel blocks with average, maxout and convolutional pooling layers are introduced and examined. We start with a simple merging layer that averages the outputs of the previous representation layers. The maxout layer, in contrast, triggers competition among different representations of the input: this pooling layer not only reduces the dimensionality of the multi-scale representations but also forms multiple sub-networks within the same model. Finally, a pointwise convolutional layer is employed to project the multi-scale representations onto a new space. Experimental results show an improvement over the core deep hybrid model as well as over kernel-based models on several real-life datasets. © 2019 Elsevier Ltd. All rights reserved.
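The three pooling blocks described in the abstract can be illustrated concretely. Below is a minimal sketch, not the authors' implementation, of averaging, maxout and pointwise-convolutional merging of multi-scale representations; it assumes PyTorch, and the class names (AveragePoolBlock, MaxoutPoolBlock, PointwiseConvBlock) are hypothetical.

```python
import torch
import torch.nn as nn


class AveragePoolBlock(nn.Module):
    """Merges multi-scale representations by element-wise averaging."""

    def forward(self, reps):
        # reps: list of tensors, each of shape (batch, features)
        return torch.stack(reps, dim=0).mean(dim=0)


class MaxoutPoolBlock(nn.Module):
    """Keeps the element-wise maximum across representations, triggering
    competition among them and implicitly forming sub-networks (only the
    winning branch receives gradient for each feature)."""

    def forward(self, reps):
        return torch.stack(reps, dim=0).max(dim=0).values


class PointwiseConvBlock(nn.Module):
    """Projects stacked multi-scale representations onto a new space with a
    1x1 (pointwise) convolution over the scale dimension."""

    def __init__(self, num_scales):
        super().__init__()
        # in_channels = number of scales, out_channels = 1: a learned,
        # feature-wise weighted combination of the scales.
        self.conv = nn.Conv1d(num_scales, 1, kernel_size=1)

    def forward(self, reps):
        x = torch.stack(reps, dim=1)      # (batch, scales, features)
        return self.conv(x).squeeze(1)    # (batch, features)


# Usage: merge three hypothetical 64-dimensional representation layers.
reps = [torch.randn(8, 64) for _ in range(3)]
merged_avg = AveragePoolBlock()(reps)      # shape (8, 64)
merged_max = MaxoutPoolBlock()(reps)       # shape (8, 64)
merged_conv = PointwiseConvBlock(3)(reps)  # shape (8, 64)
```

Note that all three blocks reduce the stacked multi-scale output back to a single representation of the original feature width, which is the dimensionality-reduction effect the abstract refers to.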

Original language: English
Pages (from-to): 46-55
Number of pages: 10
Journal: Neural Networks
Volume: 116
DOIs
Publication status: Published - Aug 2019

Keywords

  • Deep learning
  • Neural networks
  • Kernel methods
  • Pooling layer
  • Competitive learning
  • Dimensionality reduction
  • Network
