An Information Maximization Approach to Overcomplete and Recurrent Representations

Part of Advances in Neural Information Processing Systems 13 (NIPS 2000)


Authors

Oren Shriki, Haim Sompolinsky, Daniel Lee

Abstract

The principle of maximizing mutual information is applied to learning overcomplete and recurrent representations. The underlying model consists of a network of input units driving a larger number of output units with recurrent interactions. In the limit of zero noise, the network is deterministic and the mutual information can be related to the entropy of the output units. Maximizing this entropy with respect to both the feedforward connections and the recurrent interactions results in simple learning rules for both sets of parameters. The conventional independent components analysis (ICA) learning algorithm can be recovered as a special case where there is an equal number of output units and no recurrent connections. The application of these new learning rules is illustrated on a simple two-dimensional input example.
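
To make the ICA special case concrete, below is a minimal sketch of the entropy-maximization (infomax) learning rule for equal numbers of input and output units with no recurrent interactions, i.e. the conventional ICA algorithm the abstract says is recovered. It uses the natural-gradient form of the Bell-Sejnowski update with a logistic output nonlinearity. The Laplace-distributed sources, the 2-D mixing matrix, the learning rate, and the batch size are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent super-Gaussian sources, mixed linearly into 2-D inputs
# (assumed toy setup; the paper's own example is also two-dimensional).
n_samples = 5000
sources = rng.laplace(size=(2, n_samples))
mixing = np.array([[1.0, 0.6],
                   [0.4, 1.0]])          # assumed mixing matrix
x = mixing @ sources                     # observed inputs, shape (2, n_samples)

W = np.eye(2)                            # feedforward weights to learn
lr = 0.01
batch = 100

for epoch in range(50):
    perm = rng.permutation(n_samples)
    for start in range(0, n_samples, batch):
        xb = x[:, perm[start:start + batch]]
        u = W @ xb                       # net input to the output units
        y = 1.0 / (1.0 + np.exp(-u))     # sigmoidal output nonlinearity
        # Natural-gradient infomax update, averaged over the batch:
        #   dW = (I + (1 - 2y) u^T) W
        dW = (np.eye(2) + (1.0 - 2.0 * y) @ u.T / batch) @ W
        W += lr * dW

# After learning, W @ mixing should approach a scaled permutation matrix,
# i.e. the outputs recover the independent sources up to order and scale.
print(np.round(W @ mixing, 2))
```

The overcomplete and recurrent cases described in the abstract go beyond this sketch: with more output units than inputs and nonzero recurrent interactions, the output activities are no longer a simple feedforward map, and the entropy-based rules update the recurrent weights as well.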