
Do Deep Nets Really Need to be Deep?

Part of: Advances in Neural Information Processing Systems 27 (NIPS 2014)


Authors

Jimmy Ba, Rich Caruana

Conference Event Type: Poster

Abstract

Currently, deep neural networks are the state of the art on problems such as speech recognition and computer vision. In this paper we empirically demonstrate that shallow feed-forward nets can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models. Moreover, in some cases the shallow nets can learn these deep functions using the same number of parameters as the original deep models. On the TIMIT phoneme recognition and CIFAR-10 image recognition tasks, shallow nets can be trained that perform similarly to complex, well-engineered, deeper convolutional models.
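The abstract does not spell out how the shallow nets learn the deep functions; in the paper this is done by model compression: the shallow "mimic" net is trained to regress the pre-softmax logits produced by an already-trained deep model, rather than being trained on the original hard labels. Below is a minimal sketch of that idea in PyTorch, assuming a pre-trained teacher network and a data loader yielding input batches; the names and the student architecture shown are illustrative placeholders, not the paper's exact configuration.

    import torch
    import torch.nn as nn

    def train_mimic(teacher, student, loader, epochs=10, lr=1e-3):
        """Train the shallow student to regress the deep teacher's logits (L2 loss)."""
        teacher.eval()  # the teacher is fixed; only the student is updated
        opt = torch.optim.SGD(student.parameters(), lr=lr, momentum=0.9)
        mse = nn.MSELoss()
        for _ in range(epochs):
            for x, _ in loader:  # the true labels are not used for mimic training
                with torch.no_grad():
                    z_t = teacher(x)           # pre-softmax logits of the deep model
                loss = mse(student(x), z_t)    # match logits, not softmax probabilities
                opt.zero_grad()
                loss.backward()
                opt.step()
        return student

    # A hypothetical shallow student for CIFAR-10-sized inputs: one wide hidden layer.
    # (The paper also inserts a linear bottleneck layer to keep the parameter count
    # manageable and to speed up training; that detail is omitted here for brevity.)
    student = nn.Sequential(
        nn.Flatten(),
        nn.Linear(3 * 32 * 32, 8192), nn.ReLU(),
        nn.Linear(8192, 10),
    )

Training on logits gives the student a richer target than one-hot labels: it sees how the teacher ranks every class on every example, which is what allows a shallow net to approach the teacher's accuracy.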