Discovering Discrete Distributed Representations with Iterative Competitive Learning

Part of Advances in Neural Information Processing Systems 3 (NIPS 1990)


Authors

Michael C. Mozer

Abstract

Competitive learning is an unsupervised algorithm that classifies input patterns into mutually exclusive clusters. In a neural net framework, each cluster is represented by a processing unit that competes with others in a winner-take-all pool for an input pattern. I present a simple extension to the algorithm that allows it to construct discrete, distributed representations. Discrete representations are useful because they are relatively easy to analyze and their information content can readily be measured. Distributed representations are useful because they explicitly encode similarity. The basic idea is to apply competitive learning iteratively to an input pattern, and after each stage to subtract from the input pattern the component that was captured in the representation at that stage. This component is simply the weight vector of the winning unit of the competitive pool. The subtraction procedure forces competitive pools at different stages to encode different aspects of the input. The algorithm is essentially the same as a traditional data compression technique known as multistep vector quantization, although the neural net perspective suggests potentially powerful extensions to that approach.
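
The following is a minimal sketch of the staged scheme described in the abstract, not the paper's exact implementation. It assumes Euclidean-distance winner selection and the standard competitive-learning update (move the winner toward the input); the function and parameter names (`iterative_competitive_learning`, `n_stages`, `n_units`, `lr`, `n_epochs`) are hypothetical. Each stage trains a competitive pool on the current residuals, then subtracts the winning unit's weight vector from each pattern before the next stage, as in multistep vector quantization.

```python
import numpy as np

def iterative_competitive_learning(patterns, n_stages=3, n_units=8,
                                   lr=0.05, n_epochs=50, seed=0):
    """Sketch of iterative competitive learning (multistep vector quantization).

    patterns: array of shape (n_patterns, dim).
    Returns the per-stage codebooks, the per-stage winning-unit codes,
    and the final residuals.
    """
    rng = np.random.default_rng(seed)
    residuals = patterns.astype(float).copy()
    codebooks = []   # learned weight vectors, one pool per stage
    codes = []       # winning-unit index per pattern, per stage

    for stage in range(n_stages):
        # Initialize the pool's weight vectors from randomly chosen residuals.
        weights = residuals[rng.choice(len(residuals), n_units)].copy()

        # Standard competitive learning: the nearest unit wins and is
        # moved toward the input; losers are left unchanged.
        for _ in range(n_epochs):
            for x in rng.permutation(residuals):
                winner = np.argmin(np.linalg.norm(weights - x, axis=1))
                weights[winner] += lr * (x - weights[winner])

        # Code each pattern by its winner, then subtract the captured
        # component (the winner's weight vector) to form the next residual.
        winners = np.array([np.argmin(np.linalg.norm(weights - x, axis=1))
                            for x in residuals])
        residuals = residuals - weights[winners]

        codebooks.append(weights)
        codes.append(winners)

    return codebooks, codes, residuals
```

Because each stage sees only what earlier stages failed to capture, the pools are driven to encode different aspects of the input; the per-stage winner indices together form a discrete, distributed code for each pattern, and summing the corresponding weight vectors reconstructs an approximation of the original input.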