Associative memory in realistic neuronal networks

Part of Advances in Neural Information Processing Systems 14 (NIPS 2001)

Authors

Peter Latham (http://culture.neurobio.ucla.edu/~pel)

Abstract

Almost two decades ago, Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. It is still not clear, however, whether realistic neuronal networks can support multiple attractors. The main difficulty is that neuronal networks in vivo exhibit a stable background state at low firing rate, typically a few Hz. Embedding attractors is easy; doing so without destabilizing the background is not. Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. In contrast to the sparse coding limit, we find that multiple attractors can co-exist robustly with a stable background state. Mean field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented.
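To make the coexistence of a low-rate background state and memory attractors concrete, the following is a minimal one-population mean-field sketch, not the paper's actual model: for a sigmoidal gain function, the self-consistency condition r = phi(w r + I) can have a stable low-rate (background) solution, an unstable threshold solution, and a stable high-rate (memory) solution. All parameter values and the particular sigmoid below are illustrative assumptions.

```python
import numpy as np

# One-population mean-field caricature (not the paper's actual equations):
# steady states of  tau * dr/dt = -r + phi(w*r + I)  for a sigmoidal gain phi.
# All parameter values below are illustrative assumptions.

r_max, theta, sigma = 100.0, 50.0, 10.0   # saturation (Hz), sigmoid midpoint, sigmoid width
w, I = 1.0, 10.0                          # recurrent weight, external input

def phi(u):
    """Sigmoidal gain function: firing rate (Hz) as a function of input u."""
    return r_max / (1.0 + np.exp(-(u - theta) / sigma))

def dphi(u):
    """Slope of the gain function."""
    s = phi(u) / r_max
    return (r_max / sigma) * s * (1.0 - s)

# Fixed points satisfy r = phi(w*r + I); locate them via sign changes of F(r).
F = lambda r: phi(w * r + I) - r
grid = np.linspace(0.0, r_max, 20001)
brackets = np.flatnonzero(np.diff(np.sign(F(grid))) != 0)

for k in brackets:
    lo, hi = grid[k], grid[k + 1]
    for _ in range(60):                  # refine each bracket by bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(lo) * F(mid) > 0 else (lo, mid)
    r_star = 0.5 * (lo + hi)
    slope = w * dphi(w * r_star + I)     # fixed point is stable if slope < 1
    kind = "stable" if slope < 1.0 else "unstable"
    print(f"fixed point at {r_star:6.2f} Hz  ({kind}, slope {slope:.2f})")
```

With these assumed parameters the script reports a stable fixed point at a few Hz (the background), an unstable intermediate fixed point (the threshold), and a stable high-rate fixed point (the memory), which is the fixed-point structure discussed below.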

One of the most important features of the nervous system is its ability to perform associative memory. It is generally believed that associative memory is implemented using attractor networks - experimental studies point in that direction [4-7], and there are virtually no competing theoretical models. Perhaps surprisingly, however, it is still an open theoretical question whether attractors can exist in realistic neuronal networks. The "realistic" feature that is probably hardest to capture is the steady firing at low rates - the background state - that is observed throughout the intact nervous system [8-13]. The reason it is difficult to build an attractor network that is stable at low firing rates, at least in the sparse coding limit, is as follows [2,3]:

Attractor networks are constructed by strengthening recurrent connections among sub-populations of neurons. The strengthening must be large enough that neurons within a sub-population can sustain a high firing rate state, but not so large that the sub-population can be spontaneously active. This implies that the neuronal gain functions - the firing rate of the post-synaptic neurons as a function of the average firing rate of the pre-synaptic neurons - must be sigmoidal: shallow at low firing rates to provide stability, steep at intermediate rates to provide a threshold (an unstable equilibrium), and shallow again at high rates to provide saturation and a stable attractor. In other words, a requirement for the co-existence of a stable background state and multiple attractors is that the gain function of the excitatory neurons be superlinear at the observed background rates of a few Hz [2,3]. However - and this is where the problem lies - above a few Hz most realistic gain functions are nearly linear or sublinear (see, for example, Fig. B1 of [14]). The superlinearity requirement rests on the implicit assumption that the activity of the sub-population involved in a memory does not affect the other neurons in the network. While this assumption is valid in the sparse coding limit, it breaks down in realistic networks containing both excitatory and inhibitory neurons. In such networks, activity among excitatory cells results in inhibitory feedback. This feedback, if powerful enough, can stabilize attractors even without a saturating nonlinearity, essentially by stabilizing the equilibrium (considered unstable above) on the steep part of the gain function. The price one pays, though, is that a reasonable fraction of the neurons must be involved in each of the memories, which takes us away from the sparse coding limit and thus reduces network capacity [15].
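The stabilization argument can be illustrated with a two-population excitatory-inhibitory rate model using a non-saturating, threshold-linear gain. This is only a sketch; the weights, inputs, and time constants are assumptions chosen for illustration, not values from the paper. Without inhibition, a fixed point on the rising part of the gain function has a positive eigenvalue and is unstable; with sufficiently strong and fast inhibitory feedback, the same kind of fixed point becomes stable.

```python
import numpy as np

# Two-population (excitatory/inhibitory) rate-model sketch of the stabilization
# argument, with a non-saturating threshold-linear gain phi(x) = max(x, 0).
# Parameter values are illustrative assumptions, not taken from the paper.

tau_E, tau_I = 0.020, 0.010          # population time constants (s)
w_EE, w_EI, w_IE = 2.0, 1.5, 1.5     # recurrent weights
I_E, I_I = 10.0, 2.0                 # external drives

# With both populations above threshold the gain is locally linear (slope 1),
# so the fixed point solves a linear system:
#   r_E = w_EE*r_E - w_EI*r_I + I_E,   r_I = w_IE*r_E + I_I
A = np.array([[1.0 - w_EE,  w_EI],
              [     -w_IE,  1.0 ]])
r_E, r_I = np.linalg.solve(A, [I_E, I_I])
print(f"fixed point: r_E = {r_E:.1f} Hz, r_I = {r_I:.1f} Hz")

# Without inhibitory feedback, the excitatory eigenvalue (w_EE - 1)/tau_E is
# positive because w_EE > 1, so this fixed point would be unstable.
print("E alone, eigenvalue:", (w_EE - 1.0) / tau_E)

# With inhibition, the Jacobian of (dr_E/dt, dr_I/dt) at the fixed point is:
J = np.array([[(w_EE - 1.0) / tau_E, -w_EI / tau_E],
              [        w_IE / tau_I, -1.0  / tau_I]])
print("with inhibition, eigenvalues:", np.linalg.eigvals(J))
```

For these assumed parameters the fixed point sits at a few Hz on the rising (steep) part of the gain function; the lone excitatory eigenvalue is positive, while the two eigenvalues of the full excitatory-inhibitory Jacobian have negative real parts, so the inhibitory feedback is what makes the equilibrium stable.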