{"title": "Associative Memory in a Network of 'Biological' Neurons", "book": "Advances in Neural Information Processing Systems", "page_first": 84, "page_last": 90, "abstract": null, "full_text": "Associative Memory in a Network of 'Biological' Neurons \n\nWulfram Gerstner* \nDepartment of Physics \nUniversity of California \nBerkeley, CA 94720 \n\nAbstract \n\nThe Hopfield network (Hopfield, 1982, 1984) provides a simple model of an associative memory in a neuronal structure. This model, however, is based on highly artificial assumptions, especially the use of formal two-state neurons (Hopfield, 1982) or graded-response neurons (Hopfield, 1984). What happens if we replace the formal neurons by 'real' biological neurons? We address this question in two steps. First, we show that a simple model of a neuron can capture all relevant features of neuron spiking, i.e., a wide range of spiking frequencies and a realistic distribution of interspike intervals. Second, we construct an associative memory by linking these neurons together. The analytical solution for a large and fully connected network shows that the Hopfield solution is valid only for neurons with a short refractory period. If the refractory period is longer than a critical duration, the solutions are qualitatively different. The associative character of the solutions, however, is preserved. \n\n1 INTRODUCTION \n\nInformation received at the sensory level is encoded in spike trains, which are then transmitted to different parts of the brain where the main processing steps occur. Since all the spikes of any particular neuron look alike, the information of the spike train is obviously not contained in the exact shape of the spikes, but rather in their arrival times and in the correlations between the spikes. 
A model neuron which tries to keep track of the voltage trace even during the spiking, like the Hodgkin-Huxley equations (Hodgkin, 1952) and similar models, therefore carries non-essential details if we are only interested in the information of the spike train. On the other hand, a simple two-state neuron or threshold model is too simplistic, since it cannot reproduce the variety of spiking behaviour found in real neurons. The same is true for continuous or analog model neurons, which disregard the stochastic nature of neuron firing completely. In this work we construct a model of the neuron which is intermediate between these extremes. We are not concerned with the shape of the spikes and detailed voltage traces, but we want realistic interval distributions and rate functions. Finally, we link these neurons together to capture collective effects, and we construct a network that can function as an associative memory. \n\n*Present address: Physik-Department der TU Muenchen, Institut fuer Theoretische Physik, D-8046 Garching bei Muenchen \n\n2 THE MODEL NEURON \n\nFrom a neural-network point of view it is often convenient to consider a neuron as a simple computational unit with no internal parameters. In this case, the neuron is described either as a 'digital' threshold unit or as a nonlinear 'analog' element with a sigmoid input-output relation. While such a simple model might be useful for formal considerations in abstract networks, it is hard to see how it could be modified to include realistic features of neurons: How can we account for the statistical properties of the spike train beyond the mean firing frequencies? What about bursting or oscillating neurons? These are but a few of the problems with real neurons. 
\n\nWe would like to use a model neuron which is closer to biology in the sense that it produces spike trains comparable to those in real neurons. Our description of the spiking dynamics therefore emphasizes three basic notions of neurobiology: threshold, refractory period, and noise. In particular, we describe the internal state of the neuron by the membrane voltage h, which depends on the synaptic contributions from other neurons as well as on the spiking history of the neuron itself. In a simple threshold crossing process, a spike would be initiated as soon as the voltage h(t) crosses the threshold θ. Due to the statistical fluctuations of the momentary voltage around h(t), however, the spiking will be a statistical event, the spikes coming a bit too early or a bit too late compared to the formal threshold crossing time, depending on the direction of the fluctuations. This fact will be taken into account by introducing a probabilistic spiking rate r, which depends on the difference between the membrane voltage h and the threshold θ in an exponential fashion: \n\nr = (1/τ0) exp[β(h − θ)],  (1) \n\nwhere the formal temperature 1/β is a measure of the noise and τ0 is an internal time constant of the neuron. If h changes only slowly during a conveniently chosen time τ1, we can integrate over τ1, which yields the probability P_F(h) of firing during a time step of length τ1. This gives us an analytic procedure to switch from continuous time to the discrete time-step representation used later on. \n\nIf a spike is initiated in a real neuron, the neuron goes through a cycle of ion influx and efflux which changes the potential on a fast time scale and prevents immediate firing of another spike. 
To model this, we reset the potential after each spike by adding a negative refractory field h_r(t) to the potential: \n\nh(t) = h'(t) + h_r(t),  (2) \n\nwith \n\nh_r(t) = Σ_i ε_r(t − t_i),  (3) \n\nwhere t_i is the time of the i-th spike and h'(t) is the postsynaptic potential due to incoming spikes from other neurons. The form of the refractory function ε_r(τ), together with the noise level β, determines the firing characteristics of the neuron. With fairly simple refractory fields we can achieve a sigmoid dependence of the firing frequency upon the input current (figure 1) and realistic spiking statistics (figure 3). \n\n[Figure 1: f-I plot (frequency versus input current) for a standard neuron with absolute and relative refractory period. The absolute refractory period lasts for 5 ms and is followed by an exponentially decaying relative refractory function (time constant 2 ms). The refractory function is shown in figure 2.] \n\n[Figure 2: Refractory function of the model used in figure 1.] \n\nIndeed, the interval distribution changes from an approximately Poisson distribution for driving currents below threshold to an approximately Gaussian distribution above threshold. Different forms of the refractory function can lead to bursting behavior or to model neurons with adaptive behavior. In figure 4 we show a bursting neuron defined by a long-tailed refractory function with a slight overshoot at intermediate time delays. 
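The spiking mechanism of eqs. (1)-(3) is easy to simulate. The following sketch is illustrative code, not from the original paper; the parameter values, including the 5 ms absolute and 2 ms relative refractory constants taken from figures 1 and 2 and the refractory amplitude of 100, are assumptions. It draws spikes from the exponential rate of eq. (1) while accumulating the refractory field of eq. (3):

```python
import numpy as np

# Discrete-time sketch of the stochastic neuron of eqs. (1)-(3).
# I is a constant input standing in for the synaptic potential h'(t);
# gamma_abs and tau_rel give an absolute refractory period followed by
# an exponentially decaying relative refractory function (cf. figures 1-2).
def simulate_neuron(I, T=500.0, dt=1.0, beta=1.0, theta=0.0, tau0=1.0,
                    gamma_abs=5.0, tau_rel=2.0, seed=0):
    rng = np.random.default_rng(seed)
    spikes = []
    for n in range(int(T / dt)):
        t = n * dt
        h_r = 0.0                     # refractory field of eq. (3)
        for t_i in spikes:
            tau = t - t_i
            if tau < gamma_abs:
                h_r = -np.inf         # absolute refractory period
            else:
                h_r -= 100.0 * np.exp(-(tau - gamma_abs) / tau_rel)
        h = I + h_r                   # total potential, eq. (2)
        # spiking rate of eq. (1), converted to a firing probability per step
        rate = np.exp(beta * (h - theta)) / tau0 if np.isfinite(h) else 0.0
        if rng.random() < 1.0 - np.exp(-rate * dt):
            spikes.append(t)
    return spikes
```

Driving such a neuron with inputs below and above threshold reproduces the qualitative change from Poisson-like to Gaussian-like interval distributions described above.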
At low input levels, the bursts are noise-induced and appear at irregular intervals. For larger driving currents the spiking changes to regular bursting. Even a model with a simple absolute refractory period, \n\nε_r(τ) = −∞ for 0 < τ ≤ γτ1, and ε_r(τ) = 0 otherwise,  (4) \n\nhas many interesting features. The explicit solution for a network of these neurons is given in the following sections. \n\n[Figure 3: Spike trains and interval distributions for the model of figure 1 at two different input levels.] \n\n3 THE NETWORK \n\nSo far we have only described the dynamics which initiates the spikes in the neurons. Now we have to describe the spikes themselves and their synaptic transmission to other neurons. To keep track of the spikes we assign to each neuron a two-state variable S_j which usually rests at −1 and flips to +1 only when a spike is initiated. In the discrete time-step representation that we assume in the following, the output of each neuron is then described by a sequence of Ising spins S_j(t_n). \n\n[Figure 4: Spike trains for a bursting neuron. At low input level the bursts are noise-induced and appear at irregular intervals; at high input level the bursting is regular.] \n\nIn a network of neurons, neuron i may receive a spike from neuron j via the synaptic connection, and the spike will evoke a postsynaptic potential at i. 
The strength of this response will depend on the synaptic efficacy J_ij. The time course of this response, however, can be taken to have a generic form independent of the strength of the synapse. We formalize these ideas assuming linearity and write \n\nh_i(t_n) = Σ_j J_ij Σ_{τ_m} ε(τ_m) σ_j(t_n − τ_m),  (5) \n\nwhere ε(τ) might be an experimental response function and σ_j is a conveniently normalized variable proportional to S_j. \n\nFor the synaptic efficacies we assume the Hebbian matrix also taken by Hopfield, \n\nJ_ij = (1/N) Σ_{μ=1}^{p} ξ_i^μ ξ_j^μ,  (6) \n\nwhere the variables ξ_i^μ = ±1 (1 ≤ i ≤ N, 1 ≤ μ ≤ p) describe the p random patterns to be stored. We can obtain these synaptic weights by a Hebbian learning procedure. It is now straightforward to incorporate the internal dynamics of the neurons, which we described in the preceding section. The refractory field can be introduced as the diagonal elements of the synaptic connection matrix, \n\nJ_ii(τ) = ε_r(τ).  (7) \n\nIf all the neurons are equivalent, the diagonal elements must be independent of i, and J_ii(τ) = ε_r(τ) then describes the generic voltage response of our model neuron after the firing of a spike. \n\n4 RESULTS \n\nWe can solve this model analytically in the limit of a large and fully connected network. The solution depends on an additional parameter γ which characterizes the maximum spiking frequency of the neurons. To compare our results with the Hopfield model, we replace P_F(h), calculated from (1), by the generic form (1/2)(1 + tanh(βh)), and we take the case of the simple refractory field (4). In this case the parameter γ measures the absolute refractory period in units of the time step τ1. For a large maximum spiking frequency, or γ → 0, we recover the Hopfield solutions. 
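In this Hopfield limit, the Hebbian storage rule of eq. (6) and retrieval with two-state units can be sketched in a few lines. This is illustrative code, not from the original; in particular, the noiseless synchronous sign update below stands in for the full stochastic spiking dynamics, and all function names are hypothetical:

```python
import numpy as np

# Hebbian matrix of eq. (6): J_ij = (1/N) sum_mu xi_i^mu xi_j^mu.
def hebbian_weights(patterns):          # patterns: p x N array of +/-1
    p, N = patterns.shape
    J = patterns.T @ patterns / N
    np.fill_diagonal(J, 0.0)            # diagonal reserved for eq. (7)
    return J

# Overlap m with a stored pattern; m = 1 means perfect retrieval.
def overlap(state, pattern):
    return float(state @ pattern) / len(state)

# Zero-noise synchronous update S_i <- sign(sum_j J_ij S_j).
def retrieve(J, cue, steps=10):
    S = cue.copy()
    for _ in range(steps):
        S = np.where(J @ S >= 0.0, 1, -1)
    return S
```

Starting from a corrupted version of a stored pattern, the update drives the overlap toward m = 1, illustrating the attractor character of the retrieval states.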
For γ larger than a critical value γ_c the solutions are qualitatively different: there is a regime of inverse temperatures in which both the retrieval solution and the trivial solution are stable. This allows the network to remain undecided if the initial overlap with one of the patterns is not large enough. This is in contrast to the Hopfield model (Hopfield, 1982, 1984), where the network is always forced into one of the retrieval states. We compared our analytic solutions with computer simulations, which verified that the calculated stationary solutions are indeed stable states of the network with a wide basin of attraction. Thus the basic associative memory characteristics of the standard Hopfield model are robust under the replacement of the two-state neurons by more biological neurons. \n\n5 CONCLUSIONS \n\nWe constructed a network of neurons with intrinsic spiking behaviour and realistic postsynaptic response. In addition to the standard solutions we have undecided network states, which might have a biological significance in the process of decision making. There remain, of course, a number of unbiological features in the network, e.g. the assumption of full connectivity, the symmetry of the connections and the linearity of the learning rule. But most of these assumptions can be overcome, at least in principle (see e.g. Amit, 1989, for references). Our results confirm the general robustness of attractor neural networks to biological modifications, but they suggest that including more biological details also adds interesting features to the variety of states available to the network. 
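For reference, in the short-refractory-period (Hopfield) limit the stationary overlap of figure 5a follows the familiar mean-field fixed point m = tanh(βm); the bistable curves of figure 5b require the full solution and are not reproduced by this sketch (illustrative code, not from the original):

```python
import math

# Mean-field fixed point m = tanh(m / T) of the Hopfield limit
# (short refractory period); T = 1/beta is the noise level of figure 5.
def stationary_overlap(T, m0=1.0, iters=200):
    m = m0
    for _ in range(iters):
        m = math.tanh(m / T)
    return m
```

Below the critical noise level T = 1 the iteration settles at a retrieval solution with m > 0; above it, only the trivial solution m = 0 survives.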
\n\n[Figure 5: Stationary states of the network. Depending on the length of the refractory period, the retrieval behavior varies. Figures a and b show the overlap with one of the learned patterns for different noise levels T = 1/β. For a neuron with a short refractory period (figure a) the overlap curve is similar to those of the Hopfield model. For longer refractory periods (figure b) the curve is qualitatively different, showing a regime of bistability at intermediate noise levels. If the network is working at these noise levels, it depends on the initial overlap with the learned pattern whether the network will go to the trivial state with overlap 0 or to the retrieval state with large overlap (overlap m = 1 corresponds to perfect retrieval).] \n\nAcknowledgements \n\nI would like to thank William Bialek and his students at Berkeley for their generous hospitality and numerous stimulating discussions. Thanks also to J.L. van Hemmen and to Andreas Herz for many helpful comments and advice. I acknowledge the financial support of the German Academic Exchange Service (DAAD), which made my stay at Berkeley possible. \n\nReferences \n\nHopfield, J.J. (1982), Neural Networks and Physical Systems with Emergent Collective Computational Abilities, Proc. Natl. Acad. Sci. USA 79, 2554-2558. \n\nHopfield, J.J. (1984), Neurons with Graded Response have Collective Computational Properties like those of Two-State Neurons, Proc. Natl. Acad. Sci. USA 81, 3088-3092. \n\nHodgkin, A.L. and Huxley, A.F. (1952), A Quantitative Description of Membrane Current and its Application to Conduction and Excitation in Nerve, J. Physiology 117, 500-544. \n\nAmit, D.J. (1989), Modeling Brain Function: The World of Attractor Neural Networks, Ch. 7, Cambridge University Press. \n", "award": [], "sourceid": 371, "authors": [{"given_name": "Wulfram", "family_name": "Gerstner", "institution": null}]}