A Generative Model for Attractor Dynamics

Part of Advances in Neural Information Processing Systems 12 (NIPS 1999)

Authors

Richard Zemel, Michael C. Mozer

Abstract

Attractor networks, which map an input space to a discrete output space, are useful for pattern completion. However, designing a net to have a given set of attractors is notoriously tricky; training procedures are CPU-intensive and often produce spurious attractors and ill-conditioned attractor basins. These difficulties occur because each connection in the network participates in the encoding of multiple attractors. We describe an alternative formulation of attractor networks in which the encoding of knowledge is local, not distributed. Although localist attractor networks have similar dynamics to their distributed counterparts, they are much easier to work with and interpret. We propose a statistical formulation of localist attractor net dynamics, which yields a convergence proof and a mathematical interpretation of model parameters.

Attractor networks map an input space, usually continuous, to a sparse output space composed of a discrete set of alternatives. They have a long history in neural network research and are often used for pattern completion, which involves filling in missing, noisy, or incorrect features in an input pattern. The initial state of the attractor net is typically determined by the input pattern. Over time, the state is drawn to one of a predefined set of states, the attractors. Attractor net dynamics can be described by a state trajectory (Figure 1a). An attractor net is generally implemented by a set of visible units whose activity represents the instantaneous state, and optionally, a set of hidden units that assist in the computation. Attractor dynamics arise from interactions among the units. In most formulations of attractor nets [2, 3], the dynamics can be characterized by gradient descent in an energy landscape, allowing one to partition the output space into attractor basins.

Instead of homogeneous attractor basins, it is often desirable to sculpt basins that depend on the recent history of the network and the arrangement of attractors in the space. In psychological models of human cognition, for example, priming is fundamental: after the model visits an attractor, it should be faster to fall into the same attractor in the near future, i.e., the attractor basin should be broadened [1, 6]. Another property of attractor nets is key to explaining behavioral data in psychological and neurobiological models: the gang effect, in which the strength of an attractor is influenced by other attractors in its neighborhood. Figure 1b illustrates the gang effect: the proximity of the two rightmost attractors creates a deeper attractor basin, so that if the input starts at the origin it will get pulled to the right.
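To make the localist picture concrete, the following is a minimal sketch of a soft-assignment update in the spirit of the statistical formulation described in the abstract: each attractor is treated as a Gaussian with an annealed width, the state computes each attractor's responsibility, and then moves to the responsibility-weighted blend of attractor locations. The function name, the equal priors, and the geometric annealing schedule are illustrative assumptions, not the authors' exact formulation.

    import numpy as np

    def localist_step(y, attractors, sigma, priors):
        # Responsibility of each attractor for the current state:
        # Gaussian likelihood times prior, normalized over attractors.
        d2 = np.sum((attractors - y) ** 2, axis=1)
        q = priors * np.exp(-d2 / (2.0 * sigma ** 2))
        q /= q.sum()
        # New state: responsibility-weighted blend of attractor locations.
        return q @ attractors, q

    # Gang effect demo: one attractor on the left, a tight pair on the right.
    attractors = np.array([[-1.0, 0.0], [1.0, 0.1], [1.0, -0.1]])
    priors = np.full(3, 1.0 / 3.0)
    y = np.zeros(2)                              # input places the state at the origin
    for sigma in np.geomspace(2.0, 0.05, 40):    # anneal the attractor width
        y, q = localist_step(y, attractors, sigma, priors)
    print(y)  # ends near the right-hand pair: the "gang" wins

With a large initial width, all three attractors share responsibility roughly equally, so the state is immediately pulled rightward toward the pair; as the width anneals, the distant left attractor's responsibility vanishes. Because the right-hand pair is exactly symmetric about the horizontal axis in this toy layout, the state settles on the line between them; any perturbation of the start point breaks the tie.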
