{"title": "How to Describe Neuronal Activity: Spikes, Rates, or Assemblies?", "book": "Advances in Neural Information Processing Systems", "page_first": 463, "page_last": 470, "abstract": null, "full_text": "How to Describe Neuronal Activity: \n\nSpikes, Rates, or Assemblies? \n\nWulfram Gerstner and J. Leo van Hemmen \n\nPhysik-Department der TU Miinchen \n\nD-85748 Garching bei Miinchen, Germany \n\nAbstract \n\nWhat is the 'correct' theoretical description of neuronal activity? \nThe analysis of the dynamics of a globally connected network of \nspiking neurons (the Spike Response Model) shows that a descrip(cid:173)\ntion by mean firing rates is possible only if active neurons fire in(cid:173)\ncoherently. If firing occurs coherently or with spatio-temporal cor(cid:173)\nrelations, the spike structure of the neural code becomes relevant. \nAlternatively, neurons can be gathered into local or distributed en(cid:173)\nsembles or 'assemblies'. A description based on the mean ensemble \nactivity is, in principle, possible but the interaction between differ(cid:173)\nent assemblies becomes highly nonlinear. A description with spikes \nshould therefore be preferred. \n\n1 \n\nINTRODUCTION \n\nNeurons communicate by sequences of short pulses, the so-called action potentials \nor spikes. One of the most important problems in theoretical neuroscience concerns \nthe question of how information on the environment is encoded in such spike trains: \nIs the exact timing of spikes with relation to earlier spikes relevant (spike or interval \ncode (MacKay and McCulloch 1952) or does the mean firing rate averaged over sev(cid:173)\neral spikes contain all important information (rate code; see, e.g., Stein 1967)? Are \nspikes of single neurons important or do we have to consider ensembles of equivalent \nneurons (ensemble code)? 
If so, can we find local ensembles (e.g., columns; Hubel and Wiesel 1962) or do neurons form 'assemblies' (Hebb 1949) distributed all over the network? \n\n463 \n\n2 SPIKE RESPONSE MODEL \n\nWe consider a globally connected network of N neurons with 1 ≤ i ≤ N. A neuron i fires if its membrane potential passes a threshold θ. A spike at time t_i^f is described by a δ-pulse; thus S_i(t) = Σ_{f=1}^{F} δ(t − t_i^f) is the spike train of neuron i. Spikes are labelled such that t_i^1 is the most recent spike and t_i^F is the F-th spike going back in time. \n\nIn the Spike Response Model (SRM for short; Gerstner 1990, Gerstner and van Hemmen 1992), a neuron is characterized by two different response functions, ε and η^ref. Spikes which neuron i receives from other neurons evoke a synaptic potential \n\nh_i^syn(t) = Σ_j J_ij ∫_0^∞ ε(s) S_j(t − s) ds,   (1) \n\nwhere the response kernel \n\nε(s) = 0 for s < Δ^tr;  ε(s) = [(s − Δ^tr)/τ_s] exp[−(s − Δ^tr)/τ_s] for s > Δ^tr   (2) \n\ndescribes a typical excitatory or inhibitory postsynaptic potential; see Fig. 1. The weight J_ij is the synaptic efficacy of a connection from j to i, Δ^tr is the axonal (and synaptic) transmission time, and τ_s is a time constant of the postsynaptic neuron. The origin s = 0 in (2) denotes the firing time of a presynaptic spike. In simulations we usually assume τ_s = 2 ms and for Δ^tr a value between 1 and 4 ms. \n\nSimilarly, spike emission induces refractoriness immediately after spiking. This is modelled by a refractory potential \n\nh_i^ref(t) = η^ref(t − t_i^1)   (3) \n\nwith a refractory function \n\nη^ref(s) = −∞ for 0 ≤ s ≤ γ^ref;  η^ref(s) = η_0/(s − γ^ref) for s > γ^ref.   (4) \n\nFor 0 ≤ s ≤ γ^ref the neuron is in the absolute refractory period and cannot spike at all, whereas for s > γ^ref spiking is possible but difficult (relative refractory period). To put it differently, θ − η^ref(s) describes an increased threshold immediately after spiking; cf. Fig. 1. 
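A minimal numerical sketch of the two response functions in Eqs. (2) and (4) may help make them concrete. The values τ_s = 2 ms and γ^ref = 4 ms follow the simulations quoted in the text, Δ^tr = 2 ms is one of the delays used there, and η_0 = −1 is an illustrative choice, not a value fixed by the paper:

```python
import numpy as np

# Illustrative parameters; eta_0 is an assumed refractory scale.
TAU_S = 2.0      # synaptic time constant [ms]
DELTA_TR = 2.0   # axonal/synaptic transmission delay [ms]
GAMMA_REF = 4.0  # absolute refractory period [ms]
ETA_0 = -1.0     # refractory amplitude (negative: raises the effective threshold)

def epsilon(s):
    """EPSP kernel of Eq. (2): alpha-type response delayed by DELTA_TR."""
    s = np.asarray(s, dtype=float)
    u = (s - DELTA_TR) / TAU_S
    return np.where(s > DELTA_TR, u * np.exp(-u), 0.0)

def eta_ref(s):
    """Refractory kernel of Eq. (4): -inf during absolute refractoriness,
    decaying as eta_0/(s - gamma_ref) afterwards."""
    s = np.asarray(s, dtype=float)
    return np.where(s > GAMMA_REF, ETA_0 / (s - GAMMA_REF), -np.inf)

s = np.linspace(0.0, 20.0, 201)
print("EPSP peaks at s =", s[np.argmax(epsilon(s))], "ms")  # = DELTA_TR + TAU_S
```

The peak of the kernel sits at s = Δ^tr + τ_s, which is why the EPSP in Fig. 1 rises only after the transmission delay.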
In simulations, γ^ref is taken to be 4 ms. Note that, for the sake of simplicity, we assume that only the most recent spike t_i^1 induces refractoriness, whereas all past spikes t_i^f contribute to the synaptic potential; cf. Eqs. (1) and (3). \n\nFig. 1. Response functions. Immediately after firing at s = 0 the effective threshold is increased to θ − η^ref(s) (dashed). The form of an excitatory postsynaptic potential (EPSP) is described by the response function ε(s) (solid). It is delayed by a time Δ^tr. The arrow denotes the period T_osc of coherent oscillations; cf. Section 5. \n\nThe total membrane potential is the sum of both parts, i.e., \n\nh_i(t) = h_i^ref(t) + h_i^syn(t).   (5) \n\nNoise is included by introduction of a firing probability \n\nP_F(h; δt) = τ^{−1}(h) δt,   (6) \n\nwhere δt is an infinitesimal time interval and τ(h) is a time constant which depends on the momentary value of the membrane potential in relation to the threshold θ. In analogy to the chemical reaction constant we assume \n\nτ(h) = τ_0 exp[−β(h − θ)],   (7) \n\nwhere τ_0 is the response time at threshold. The parameter β determines the amount of noise in the system. For β → ∞ we recover the noise-free behavior, i.e., a neuron fires immediately if h > θ (τ → 0), but it cannot fire if h < θ (τ → ∞). Eqs. (1), (3), (5), and (6) define the spiking dynamics in a network of SRM neurons. \n\n3 FIRING STATISTICS \n\nWe start our considerations with a large ensemble of identical neurons driven by the same arbitrary synaptic potential h^syn(t). We assume that all neurons have fired a first spike at t = t^f. Thus the total membrane potential is h(t) = h^syn(t) + η^ref(t − t^f). 
If h(t) slowly approaches θ, some of the neurons will fire again. We now ask for the probability that a neuron which has fired at time t^f will fire again at a later time t. The conditional probability p_h^{(2)}(t|t^f) that the next spike of a given neuron occurs at time t > t^f is \n\np_h^{(2)}(t|t^f) = τ^{−1}[h(t)] exp{ −∫_{t^f}^{t} τ^{−1}[h(s')] ds' }.   (8) \n\nThe exponential factor is the portion of neurons that have survived from time t^f to time t without firing again, and the prefactor τ^{−1}[h(t)] is the instantaneous firing probability (6) at time t. Since the refractory potential is reset after each spike, the spiking statistics does not depend on earlier spikes; in other words, it is fully described by p_h^{(2)}(t|t^f). This will be used below; cf. Eq. (14). \n\nAs a special case, we may consider constant synaptic input h^syn = h^0. In this case, (8) yields the distribution of inter-spike intervals in a spike train of a neuron driven by constant input h^0. The mean firing rate at an input level h^0 is defined as the inverse of the mean inter-spike interval. Integration by parts yields \n\nf[h^0] = { ∫_{t^f}^{∞} dt (t − t^f) p_{h^0}^{(2)}(t|t^f) }^{−1} = { ∫_0^∞ ds exp{ −∫_0^s τ^{−1}[h^0 + η^ref(s')] ds' } }^{−1}.   (9) \n\nThus both firing rate and interval distribution can be calculated for arbitrary inputs. \n\n4 ASSEMBLY FORMATION AND NETWORK DYNAMICS \n\nWe now turn to a large, but structured network. Structure is induced by the formation of different assemblies in the system. Each neuronal assembly a^μ (Hebb 1949) consists of neurons which have the tendency to be active at the same time. Following the traditional interpretation, active means an elevated mean firing rate during some reasonable period of time. Later, in Section 5.3, we will deal with a different interpretation where active means a spike within a time window of a few ms. 
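The firing statistics of Section 3 can be evaluated by direct quadrature: for constant input h^0 the code below builds the hazard τ^{−1}[h^0 + η^ref(s)], the survivor function, the interval distribution of Eq. (8), and the mean rate of Eq. (9). All parameter values here (τ_0, β, θ, h^0, and the refractory constants) are illustrative assumptions, not values fixed by the text:

```python
import numpy as np

TAU_0, BETA, THETA = 1.0, 5.0, 1.0   # escape-noise parameters (assumed)
GAMMA_REF, ETA_0 = 4.0, -1.0          # refractory kernel of Eq. (4)
H0 = 1.2                              # constant synaptic input (assumed)

def eta_ref(s):
    return np.where(s > GAMMA_REF, ETA_0 / (s - GAMMA_REF), -np.inf)

def hazard(s):
    """Instantaneous rate tau^{-1}(h) = tau_0^{-1} exp[beta(h - theta)], Eq. (7)."""
    h = H0 + eta_ref(s)
    return np.exp(BETA * (h - THETA)) / TAU_0  # exp(-inf) = 0 while refractory

ds = 0.01
s = np.arange(0.0, 200.0, ds) + ds / 2         # midpoint grid [ms]
rho = hazard(s)
survivor = np.exp(-np.cumsum(rho) * ds)        # fraction not yet fired again
p_interval = rho * survivor                    # interval distribution, Eq. (8)
rate = 1.0 / np.sum(survivor * ds)             # mean rate via Eq. (9) [1/ms]
print(f"mean rate = {1000 * rate:.1f} Hz, "
      f"distribution mass = {np.sum(p_interval) * ds:.3f}")
```

The distribution mass approaches 1, and the mean interval 1/f[h^0] necessarily exceeds the absolute refractory period γ^ref.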
In any case, the notion of simultaneous activity allows us to define an activity pattern {ξ_i^μ, 1 ≤ i ≤ N} with ξ_i^μ = 1 if i ∈ a^μ and ξ_i^μ = 0 otherwise. Each neuron may belong to several assemblies 1 ≤ μ ≤ q. The vector ξ_i = (ξ_i^1, ..., ξ_i^q) is the 'identity card' of neuron i; e.g., ξ_i = (1,0,0,1,0) says that neuron i belongs to assemblies 1 and 4 but not to assemblies 2, 3, and 5. \n\nNote that, in general, there are many neurons with the same identity card. This can be used to define ensembles (or sublattices) L(x) of equivalent neurons, i.e., L(x) = {i | ξ_i = x} (van Hemmen and Kühn 1991). In general, the number of neurons |L(x)| in an ensemble L(x) goes to infinity as N → ∞, and we write |L(x)| = p(x)N. The mean activity of an ensemble L(x) can be defined by \n\nA(x, t) = lim_{δt→0} lim_{N→∞} (1/δt) |L(x)|^{−1} Σ_{i∈L(x)} ∫_t^{t+δt} S_i(t') dt'.   (10) \n\nIn the following we assume that the synaptic efficacies have been adjusted according to some Hebbian learning rule in a way that allows us to stabilize the different activity patterns or assemblies a^μ. To be specific, we assume \n\nJ_ij = (J_0/N) Σ_{μ=1}^{q} Σ_{ν=1}^{q} Q_{μν} post(ξ_i^μ) pre(ξ_j^ν),   (11) \n\nwhere post(x) and pre(x) are some arbitrary functions characterizing the pre- and postsynaptic part of synaptic learning. Note that for Q_{μν} = δ_{μν} and post(x) and pre(x) linear, Eq. (11) can be reduced to the usual Hebb rule. \n\nWith the above definitions we can write the synaptic potential of a neuron i ∈ L(x) in the following form \n\nh^syn(x, t) = J_0 Σ_{μ=1}^{q} Σ_{ν=1}^{q} Q_{μν} post(x^μ) Σ_z pre(z^ν) p(z) ∫_0^∞ ε(s') A(z, t − s') ds'.   (12) \n\nWe note that the indices i and j have disappeared and there remains a dependence upon x and z only. 
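The coupling matrix of Eq. (11) can be written in a few lines, and the sketch below also checks the remark that Q_{μν} = δ_{μν} with linear pre/post recovers the standard Hebb rule. Network size, number of patterns, and pattern statistics are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, q, J0 = 200, 5, 1.0
xi = (rng.random((q, N)) < 0.5).astype(float)   # patterns xi_i^mu in {0,1}

def couplings(xi, Q, post=lambda x: x, pre=lambda x: x):
    """J_ij = (J0/N) * sum_{mu,nu} Q_mu_nu post(xi_i^mu) pre(xi_j^nu), Eq. (11)."""
    return (J0 / N) * post(xi).T @ Q @ pre(xi)

J_general = couplings(xi, Q=np.eye(q))          # Q = identity, linear pre/post
J_hebb = (J0 / N) * xi.T @ xi                   # standard Hebb rule
print("max deviation from Hebb rule:", np.abs(J_general - J_hebb).max())
```

Nonlinear choices of post(·) and pre(·), or off-diagonal Q, simply plug into `couplings` without changing its structure.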
The activity of a typical ensemble is given by (Gerstner and van Hemmen 1993, 1994) \n\nA(x, t) = ∫_0^∞ p^{(2)}(t|t − s) A(x, t − s) ds,   (13) \n\nwhere \n\np^{(2)}(t|t − s) = τ^{−1}[h^syn(x, t) + η^ref(s)] exp{ −∫_0^s τ^{−1}[h^syn(x, t − s + s') + η^ref(s')] ds' }   (14) \n\nis the conditional probability (8) that a neuron i ∈ L(x) which has fired at time t − s fires again at time t. Equations (12) - (14) define the ensemble dynamics of the network. \n\n5 DISCUSSION \n\n5.1 ENSEMBLE CODE \n\nEquations (12) - (14) show that in a large network a description by mean ensemble activities is, in principle, possible. A couple of things, however, should be noted. First, the interaction between the activity of different ensembles is highly nonlinear. It involves three integrations over the past and one exponentiation; cf. (12) - (14). If we had started theoretical modeling with an approach based on mean activities, it would have been hard to find the correct interaction term. \n\nSecond, L(x) defines an ensemble of equivalent neurons which is a subset of a given assembly a^μ. A reduction of (12) to pure assembly activities is, in general, not possible. Finally, equivalent neurons that form an ensemble L(x) are not necessarily situated next to each other. In fact, they may be distributed all over the network; cf. Fig. 2. In this case a local ensemble average yields meaningless results. A theoretical model based on local ensemble averaging is useful only if we know that neighboring neurons have the same 'identity card'. \n\nFig. 2. Stationary activity (incoherent firing). In this case a description by firing rates is possible. (a) Ensemble averaged activity A(x, t). (b) Spike raster of 30 neurons out of a network of 4000. (c) Time-averaged mean firing rate f. We have two different assemblies, one of them active (Δ^tr = 2 ms, β = 5). \n\n5.2 RATE CODE \n\nCan the system of Eqs. (12) - (14) be transformed into a rate description? In general, this is not the case, but if we assume that the ensemble activities are constant in time, i.e., A(x, t) = A(x), then an exact reduction is possible. The result is a fixed-point equation (Gerstner and van Hemmen 1992) \n\nA(x) = f[J_0 Σ_{μ=1}^{q} Σ_{ν=1}^{q} Q_{μν} post(x^μ) Σ_z pre(z^ν) p(z) A(z)],   (15) \n\nwhere \n\nf[h^syn] = { ∫_0^∞ ds exp{ −∫_0^s τ^{−1}[h^syn + η^ref(s')] ds' } }^{−1}   (16) \n\nis the mean firing rate (9) of a typical neuron stimulated by a synaptic input h^syn. Constant activities correspond to incoherent, stationary firing, and in this case a rate code is sufficient; cf. Fig. 2. \n\nFig. 3. Stability of stationary states. The postsynaptic potential h^syn is plotted as a function of time. Every 100 ms the delay Δ^tr has been increased by 0.5 ms. In the stationary state (Δ^tr = 1.5 ms and Δ^tr = 3.5 ms), active neurons fire regularly with rate T_p^{−1} = (5.5 ms)^{−1}. For a delay Δ^tr > 3.5 ms, oscillations with frequency ω_1 = 2π/T_p build up rapidly. For intermediate delays 2 ≤ Δ^tr ≤ 2.5 ms, small-amplitude oscillations with twice the frequency occur. Higher harmonics are suppressed by noise (β = 20). \n\nTwo points should, however, be kept in mind. First, a stationary state of incoherent firing is not necessarily stable. In fact, in a noise-free system the stationary state is always unstable and oscillations build up (Gerstner and van Hemmen 1993). In a system with noise, the stability depends on the noise level β and the delay Δ^tr of axonal and synaptic transmission (Gerstner and van Hemmen 1994). This is shown in Fig. 3, where the delay Δ^tr has been increased every 100 ms. The frequency of the small-amplitude oscillation around the stationary state is approximately equal to the mean firing rate (16) in the stationary state or higher harmonics thereof. A small-amplitude oscillation corresponds to partially synchronized activity. Note that for Δ^tr = 4 ms a large-amplitude oscillation builds up. Here all neurons fire in nearly perfect synchrony; cf. Fig. 4. In the noiseless case β → ∞, the period of such a collective or 'locked' oscillation can be found from the threshold condition \n\nT_osc = inf{ s | θ = η^ref(s) + J_0 Σ_n ε(ns) }.   (17) \n\nIn most cases the contribution with n = 1 is dominant, which allows a simple graphical solution: the first intersection of the effective threshold θ − η^ref(s) with the weighted EPSP J_0 ε(s) yields the oscillation period; cf. Fig. 1. An analytical argument shows that locking is stable only if dε/ds > 0 at s = T_osc (Gerstner and van Hemmen 1993). \n\nFig. 4. Oscillatory activity (coherent firing). In this case a description by firing rates must be combined with a description by ensemble activities. (a) Ensemble averaged activity A(x, t). (b) Spike raster of 30 neurons out of a network of 4000. (c) Time-averaged mean firing rate f. In this simulation, we have used Δ^tr = 4 ms and β = 8. \n\nSecond, even if the incoherent state is stable and attractive, there is always a transition time before the stationary state is assumed. During this time, a rate description is insufficient and we have to go back to the full dynamic equations (12) - (14). Similarly, if neurons are subject to a fast time-dependent external stimulus, a rate code fails. \n\n5.3 SPIKE CODE \n\nA superficial inspection of Eqs. (12) - (14) gives the impression that all information about neuronal spiking has disappeared. This is, however, false. The term A(x, t − s) in (13) denotes all neurons with 'identity card' x that have fired at time t − s. The integration kernel in (13) is the conditional probability that one of these neurons fires again at time t. Keeping t − s fixed and varying t we get the distribution of inter-spike intervals for neurons in L(x). Thus information on both spikes and intervals is contained in (13) and (14). 
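The ensemble dynamics of Eqs. (12) - (14) can be sketched numerically for a single homogeneous assembly. The code below discretizes the integral equation (13); as a simplifying assumption (not the paper's procedure), the survivor factor of Eq. (14) uses the current value of h^syn rather than its full history, a quasi-static approximation. All parameters are illustrative:

```python
import numpy as np

DT, T_MAX, S_MAX = 0.1, 100.0, 30.0     # time step, run length, memory [ms]
TAU_0, BETA, THETA = 1.0, 8.0, 1.0      # escape noise, Eq. (7)
TAU_S, DELTA_TR = 2.0, 2.0              # EPSP kernel, Eq. (2)
GAMMA_REF, ETA_0, J0 = 4.0, -1.0, 2.0   # refractoriness and coupling

n_s = int(S_MAX / DT)
s = (np.arange(n_s) + 0.5) * DT
eps = np.where(s > DELTA_TR,
               (s - DELTA_TR) / TAU_S * np.exp(-(s - DELTA_TR) / TAU_S), 0.0)
eta = np.where(s > GAMMA_REF, ETA_0 / (s - GAMMA_REF), -np.inf)

def hazard(h):
    return np.exp(BETA * (h - THETA)) / TAU_0

n_t = int(T_MAX / DT)
A = np.zeros(n_t + n_s)
A[:n_s] = 1.0 / GAMMA_REF               # arbitrary initial activity history
for t in range(n_s, n_t + n_s):
    past = A[t - n_s:t][::-1]           # A(t - s) on the s grid
    h_syn = J0 * np.sum(eps * past) * DT     # Eq. (12), one assembly
    rho = hazard(h_syn + eta)                # hazard of neurons of age s
    survive = np.exp(-np.cumsum(rho) * DT)   # survivor factor of Eq. (14)
    p2 = rho * survive                       # conditional interval density
    A[t] = np.sum(p2 * past) * DT            # Eq. (13)
print("final activity level:", A[-1], "[1/ms]")
```

Because the discrete kernel p2 integrates to at most one, the activity stays bounded; depending on Δ^tr and β such a loop relaxes to a stationary level or develops the oscillations discussed above.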
\n\nWe can make use of this fact, if we consider network states where in every time step a \ndifferent assembly is active. This leads to a spatia-temporal spike pattern as shown \nin Fig. 5. To transform a specific spike pattern into a stable state of the network \nwe can use a Hebbian learning rule. However, in contrast to the standard rule, a \nsynapse is strenthened only if pre- and postsynaptic activity occurs simultaneously \nwithin a time window of a few ms (Gerstner et al. 1993). Note that in this case, \naveraging over time or space spoils the information contained in the spike pattern. \n\n5.4 CONCLUSIONS \n\n(12) - (14) show that in our large and fully connected network an \nEquations. \nensemble code with an appropriately chosen ensemble is sufficient. If, however, the \nefficacies (11) and the connection scheme become more involved, the construction \nof appropriate ensembles becomes more and more difficult. Also, in a finite network \nwe cannot make use of the law of large number in defining the activities (10). Thus, \nin general, we should always start with a network model of spiking neurons. \n\n\f470 \n\nGerstner and van Hemmen \n\na) \n\nactivity \n\n~~C =: ] \n\n100 \n\n200 \n\n150 \n\ntime [ms] \n\nb) \n\n30 \n\n.. 20 \ng \n!5 \n! 10 \n\n\u00b70 \n\n0 \n\no \n\n.. \n.. . \n\n\u2022 \n0 \n\n\u2022\u2022\u2022 \n\nrata [Hz] \n\n0) \n30 n-\"~----' \n\n.. \n\u2022 \n. .... \n\n\u2022 \n\ne. \n\n20 \n\n10 \n\nFig. 5 \nSpatio-temporal spike pattern. \nIn this case, neither firing rates \nnor locally averaged activities \ncontain enough information to \ndescribe the state of the net(cid:173)\n(a) Ensemble averaged \nwork. \nactivity A(t). (b) Spike raster of \n30 neurons out of a network of \n4000. ( c) Time-averaged mean \nfiring rate f. \n\no '--__ o-\".'___---:-~---\"'----~ 0 \n1()0 \n\n200 \n\n,-,-. (---''---'. 
\n0-\n\n100 200 \n\nrata [Hz] \n\nAcknowledgements: This work has been supported by the Deutsche Forschungs(cid:173)\ngemeinschaft (DFG) under grant No. He 1729/2-1. \n\nReferences \n\nGerstner W (1990) Associative memory in a network of 'biological' neurons. In: \nAdvances in Neural Information Processing Systems 3, edited by R.P. Lippmann, \nJ .E. Moody, and D.S. Touretzky (Morgan Kaufmann, San Mateo, CA) pp 84-90 \n\nGerstner Wand van Hemmen JL (1992a) Associative memory in a network of \n'spiking' neurons. Network 3:139-164 \n\nGerstner W, van Hemmen JL (1993) Coherence and incoherence in a globally cou(cid:173)\npled ensemble of pulse-emitting units. Phys. Rev. Lett. 71:312-315 \nGerstner W, Ritz R, van Hemmen JL (1993b) Why spikes? Hebbian learning and \nretrieval of time-resolved excitation patterns. BioI. Cybern. 69:503-515 \n\nGerstner Wand van Hemmen JL (1994) Coding and Information processing in \nneural systems. In: Models of neural networks, Vol. 2, edited by E. Domany, J .L. \nvan Hemmen and K. Schulten (Springer-Verlag, Berlin, Heidelberg, New York) pp \nIff \n\nHebb DO (1949) The Organization of Behavior. Wiley, New York \n\nvan Hemmen JL and Kiihn R(1991) Collective phenomena in neural networks. In: \nModels of neural networks, edited by E. Domany, J .L. van Hemmen and K. Schulten \n(Springer-Verlag, Berlin, Heidelberg, New York) pp Iff \n\nHubel DH, Wiesel TN (1962) Receptive fields, binocular interaction and functional \narchitecture in the cat's visual cortex. J. Neurophysiol. 28:215-243 \n\nMacKay DM, McCulloch WS (1952) The limiting information capacity of a neuronal \nlink. Bull. of Mathm. Biophysics 14:127-135 \n\nStein RB (1967) The information capacity of nerve cells using a frequency code. \nBiophys. J. 7:797-826 \n\n\f", "award": [], "sourceid": 850, "authors": [{"given_name": "Wulfram", "family_name": "Gerstner", "institution": null}, {"given_name": "J.", "family_name": "van Hemmen", "institution": null}]}