Competitive Anti-Hebbian Learning of Invariants

Part of Advances in Neural Information Processing Systems 4 (NIPS 1991)

Authors

Nicol Schraudolph, Terrence J. Sejnowski

Abstract

Although the detection of invariant structure in a given set of input patterns is vital to many recognition tasks, connectionist learning rules tend to focus on directions of high variance (principal components). The prediction paradigm is often used to reconcile this dichotomy; here we suggest a more direct approach to invariant learning based on an anti-Hebbian learning rule. An unsupervised two-layer network implementing this method in a competitive setting learns to extract coherent depth information from random-dot stereograms.

1 INTRODUCTION: LEARNING INVARIANT STRUCTURE

Many connectionist learning algorithms share with principal component analysis (Jolliffe, 1986) the strategy of extracting the directions of highest variance from the input. A single Hebbian neuron, for instance, will come to encode the input's first principal component (Oja and Karhunen, 1985); various forms of lateral interaction can be used to force a layer of such nodes to differentiate and span the principal component subspace - cf. (Sanger, 1989; Kung, 1990; Leen, 1991), and others. The same type of representation also develops in the hidden layer of backpropagation autoassociator networks (Baldi and Hornik, 1989).
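As a minimal illustration of this point (not taken from the paper), the following NumPy sketch trains a single linear neuron with Oja's Hebbian rule on synthetic two-dimensional data; the weight vector converges to the direction of highest variance, i.e. the first principal component. The data, learning rate, and variable names are assumptions made for the example only.

    # Sketch: a single Hebbian neuron (Oja's rule) extracting the first
    # principal component of synthetic data. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic 2-D data whose dominant variance lies along the [1, 1] direction.
    cov = np.array([[3.0, 2.0],
                    [2.0, 3.0]])
    X = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)

    w = rng.normal(size=2)           # random initial weight vector
    eta = 0.01                       # learning rate

    for x in X:
        y = w @ x                    # neuron output
        w += eta * y * (x - y * w)   # Oja's rule: Hebbian term plus decay

    # Compare against the leading eigenvector of the sample covariance.
    eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
    pc1 = eigvecs[:, np.argmax(eigvals)]
    print("learned direction:", w / np.linalg.norm(w))
    print("first principal component:", pc1)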

However, the directions of highest variance need not always be those that yield the most information, or - as the case may be - the information we are interested in (Intrator, 1991). In fact, it is sometimes desirable to extract the invariant structure of a stimulus instead, learning to encode those aspects that vary the least. The problem, then, is how to achieve this within a connectionist framework that is so closely tied to the maximization of variance.
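To make the contrast concrete, here is a hedged sketch, not the competitive two-layer network described in this paper, of a generic anti-Hebbian update: flipping the sign of the Hebbian term and renormalizing the weights performs stochastic gradient descent on the output variance, so the weight vector settles on the direction of least variance, i.e. the invariant direction, rather than the principal component. All data and parameter choices below are illustrative assumptions.

    # Sketch: anti-Hebbian update with explicit weight normalization,
    # converging to the minor (least-variance) direction. Illustrative only;
    # not the network architecture proposed in the paper.
    import numpy as np

    rng = np.random.default_rng(1)

    cov = np.array([[3.0, 2.0],
                    [2.0, 3.0]])      # least variance lies along [1, -1]
    X = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=5000)

    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    eta = 0.005

    for x in X:
        y = w @ x
        w -= eta * y * x              # anti-Hebbian: reduce response variance
        w /= np.linalg.norm(w)        # keep the weight vector on the unit sphere

    eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
    minor = eigvecs[:, np.argmin(eigvals)]
    print("learned direction:", w)
    print("minor component:  ", minor)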