Dimensionality Reduction and Prior Knowledge in E-Set Recognition

Part of Advances in Neural Information Processing Systems 2 (NIPS 1989)

Kevin Lang, Geoffrey E. Hinton


It is well known that when an automatic learning algorithm is applied to a fixed corpus of data, the size of the corpus places an upper bound on the number of degrees of freedom that the model can contain if it is to generalize well. Because the amount of hardware in a neural network typically increases with the dimensionality of its inputs, it can be challenging to build a high-performance network for classifying large input patterns. In this paper, several techniques for addressing this problem are discussed in the context of an isolated word recognition task.
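To make the scaling concrete, here is a minimal sketch (not from the paper) of how input dimensionality drives the number of free parameters in a single-hidden-layer network. All layer sizes below are hypothetical illustrations, not the authors' configuration.

```python
def mlp_params(n_in, n_hidden, n_out):
    """Weights plus biases for a fully connected input->hidden->output net."""
    return (n_in * n_hidden + n_hidden) + (n_hidden * n_out + n_out)

# Hypothetical sizes: a large raw input pattern vs. a reduced representation,
# both feeding the same hidden layer and the same number of output classes.
raw = mlp_params(n_in=1024, n_hidden=50, n_out=9)
reduced = mlp_params(n_in=64, n_hidden=50, n_out=9)

print(raw)      # 51709 free parameters with the raw input
print(reduced)  # 3709 free parameters after dimensionality reduction
```

With a fixed corpus, the smaller model is far less likely to overfit, which is the motivation for reducing input dimensionality before training.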