{"title": "Why Neuronal Dynamics Should Control Synaptic Learning Rules", "book": "Advances in Neural Information Processing Systems", "page_first": 285, "page_last": 292, "abstract": null, "full_text": "Why neuronal dynamics should control \n\nsynaptic learning rules \n\nJesper Tegner \n\nStockholm Bioinformatics Center \n\nDept. of Numerical Analysis \n\n& Computing Science \n\nRoyal Institute for Technology \n\nS-10044 Stockholm, Sweden \n\njespert@nada.kth.se \n\nAdam Kepecs \n\nVolen Center for Complex Systems \n\nBrandeis University \nWaltham, MA 02454 \nkepecs@brandeis.edu \n\nAbstract \n\nHebbian learning rules are generally formulated as static rules. Un(cid:173)\nder changing condition (e.g. neuromodulation, input statistics) \nmost rules are sensitive to parameters. In particular, recent work \nhas focused on two different formulations of spike-timing-dependent \nplasticity rules. Additive STDP [1] is remarkably versatile but \nalso very fragile, whereas multiplicative STDP [2, 3] is more ro(cid:173)\nbust but lacks attractive features such as synaptic competition and \nrate stabilization. Here we address the problem of robustness in \nthe additive STDP rule. We derive an adaptive control scheme, \nwhere the learning function is under fast dynamic control by post(cid:173)\nsynaptic activity to stabilize learning under a variety of conditions. \nSuch a control scheme can be implemented using known biophysical \nmechanisms of synapses. We show that this adaptive rule makes \nthe addit ive STDP more robust. Finally, we give an example how \nmeta plasticity of the adaptive rule can be used to guide STDP \ninto different type of learning regimes. \n\n1 \n\nIntroduction \n\nHebbian learning rules are widely used to model synaptic modification shaping the \nfunctional connectivity of neural networks [4, 5]. To ensure competition between \nsynapses and stability of learning, constraints have to be added to correlational Heb(cid:173)\nbian learning rules [6]. 
Recent experiments revealed a mode of synaptic plasticity that provides new possibilities and constraints for synaptic learning rules [7, 8, 9]. It has been found that synapses are strengthened if a presynaptic spike precedes a postsynaptic spike within a short (about 20 ms) time window, while the reverse spike order leads to synaptic weakening. This rule has been termed spike-timing-dependent plasticity (STDP) [1]. Computational models highlighted how STDP combines synaptic strengthening and weakening so that learning gives rise to synaptic competition in a way that stabilizes neuronal firing rates.\n\nRecent modeling studies have, however, demonstrated that whether an STDP-type rule results in competition or rate stabilization depends on the exact formulation of the weight update scheme [3, 2]. Sompolinsky and colleagues [2] introduced a distinction between additive and multiplicative weight updating in STDP. In the additive version of an STDP update rule studied by Abbott and coworkers [1, 10], the magnitude of synaptic change is independent of synaptic strength. Here, it is necessary to add hard weight bounds to stabilize learning. For this version of the rule (aSTDP), the steady-state synaptic weight distribution is bimodal. In sharp contrast, using a multiplicative STDP rule, where the amount of weight increase scales inversely with the present weight size, produces neither synaptic competition nor rate normalization [3, 2]. In this multiplicative scenario the synaptic weight distribution is unimodal. Activity-dependent synaptic scaling has recently been proposed as a separate mechanism to ensure synaptic competition operating on a slow (days) time scale [3]. Experimental data as of today are not yet sufficient to determine the circumstances under which the STDP rule is additive or multiplicative. 
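The additive/multiplicative distinction can be made concrete in a short sketch. This is illustrative only: the parameter values, the `stdp_update` name, and the exact multiplicative weight dependence (a step that scales linearly with the distance to the bound, as in [3]) are assumptions for exposition, not taken verbatim from the paper.

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.005, a_minus=0.00525,
                tau=20.0, w_max=1.0, mode="additive"):
    """One pairing update for a synapse of weight w.

    dt = t_post - t_pre (ms): dt > 0 means pre precedes post (LTP),
    dt < 0 means post precedes pre (LTD). 'additive' ignores w and
    relies on hard bounds [0, w_max]; 'multiplicative' scales the step
    by the remaining distance to the bound.
    """
    if dt > 0:                       # pre-before-post: potentiation
        dw = a_plus * np.exp(-dt / tau)
        if mode == "multiplicative":
            dw *= (w_max - w)        # step shrinks as w approaches w_max
    else:                            # post-before-pre: depression
        dw = -a_minus * np.exp(dt / tau)
        if mode == "multiplicative":
            dw *= w                  # step shrinks as w approaches 0
    return float(np.clip(w + dw, 0.0, w_max))
```

With the additive rule every synapse takes the same-sized step regardless of its strength, which is what drives weights toward the bounds (bimodal distribution); the multiplicative rule's weight-dependent step pulls the distribution toward the interior (unimodal).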
\n\nIn this study we examine the stabilization properties of the additive STDP rule. In the first section we show that the aSTDP rule normalizes postsynaptic firing rates only in a limited parameter range. The critical parameter of aSTDP is the ratio (α) between the amount of synaptic depression and potentiation. We show that different input statistics necessitate different α ratios for aSTDP to remain stable. This leads us to consider an adaptive version of aSTDP in order to create a rule that is both competitive and rate stabilizing under different circumstances.\n\nNext, we use a Fokker-Planck formalism to clarify what determines when an additive STDP rule fails to stabilize the postsynaptic firing rate. Here we derive the requirement for how the potentiation-to-depression ratio should change with neuronal activity. In the last section we provide a biologically realistic implementation of the adaptive rule and perform numerical simulations to show how different parameterizations of the adaptive rule can guide STDP into differentially rate-sensitive regimes.\n\n2 Additive STDP does not always stabilize learning\n\nFirst, we numerically simulated an integrate-and-fire model receiving 1000 excitatory and 250 inhibitory afferents. The weights of the excitatory synapses were updated according to the additive STDP rule. We used the model developed by Song et al., 2000 [1]. The learning kernel L(τ) is A+ exp(τ/τ+) if τ < 0 or −A− exp(−τ/τ−) if τ > 0, where A− and A+ denote the amplitudes of depression and potentiation respectively. Following [1] we use τ+ = τ− = 20 ms for the time window of learning. The integral over the temporal window of the synaptic learning function (L) is always negative. Synaptic weights change according to\n\ndw_i/dt = ∫ L(τ) s_pre(t + τ) s_post(t) dτ,  w_i ∈ [0, w_max]  (1)\n\nwhere s(t) denotes a delta function representing a spike at time t. 
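The learning kernel L(τ) and the sign of its integral can be checked numerically. The values of A+ and of the ratio α = A−/A+ below are illustrative, and the sign convention (potentiation for τ < 0) follows the definition above.

```python
import numpy as np

def learning_kernel(tau, a_plus=0.005, alpha=1.05, tau_p=20.0, tau_m=20.0):
    """STDP learning kernel L(tau) from Section 2: potentiating for
    tau < 0, depressing for tau > 0, with A_- = alpha * A_+."""
    return np.where(tau < 0,
                    a_plus * np.exp(tau / tau_p),
                    -alpha * a_plus * np.exp(-tau / tau_m))

# Riemann-sum check that the integral over the learning window is
# negative whenever alpha > 1: analytically it is
# A_+ tau_+ - A_- tau_- (about -0.005 here, up to window truncation).
taus = np.arange(-100.0, 100.0, 0.01)
integral = float(np.sum(learning_kernel(taus)) * 0.01)
```

A negative integral means that, averaged over random pre/post timings, the rule is net depressing, which is what keeps uncorrelated inputs from driving runaway potentiation.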
Correlations between input rates were generated by adding a common bias rate in a graded manner across synapses, so that the first afferent has zero correlation while the last afferent has the maximal correlation, C_max.\n\nWe first examine how the depression/potentiation ratio (α = LTD/LTP) [2] controls the dependence of the output firing rate on the synaptic input rate, here referred to as the effective neuronal gain. Provided that α is sufficiently large, the STDP rule controls the postsynaptic firing rate (Fig. 1A). The stabilizing effect of the STDP rule is therefore equivalent to having a weak neuronal gain.\n\nFigure 1: A STDP controls neuronal gain. The slope of the dependence of the postsynaptic output rate on the presynaptic input rate is referred to as the effective neuronal gain. The initial firing rate is shown by the upper curve while the lower line displays the final postsynaptic firing rate. The gain is reduced provided that the depression/potentiation ratio (α = 1.05 here) is large enough. The input is uncorrelated. B Increasing input correlations increases neuronal gain. When the synaptic input is strongly correlated the postsynaptic neuron operates in a high-gain mode characterized by a larger slope and a larger baseline rate. Input correlations were uniformly distributed between 0 and a maximal value, C_max. The maximal correlation increases in the direction of the arrow: 0.0; 0.2; 0.3; 0.4; 0.5; 0.6; 0.7. The α ratio is 1.05. 
Note that for further increases in the presynaptic rates, postsynaptic firing can increase to over 1000 Hz. C The depression/potentiation ratio sets the neuronal gain. The α ratios increase in the direction of the arrow: 1.025; 1.05; 1.075; 1.1025; 1.155; 1.2075. C_max is 0.5.\n\nWe find that the neuronal gain is extremely sensitive to the value of α as well as to the amount of afferent input correlations. Figure 1B shows that increasing the amount of input correlations for a given α value increases the overall firing rate and the slope of the input-output curve, thus leading to a larger effective gain. Increasing the amount of correlations between the synaptic afferents could therefore be interpreted as increasing the effective neuronal gain. Note that the baseline firing at a presynaptic drive of 20 Hz is also increased. Next, we examined how neuronal gain depends on the value of α in the STDP rule (Figure 1C). The high-gain and high-rate mode induced by strong input correlations was reduced to a lower-gain and lower-rate mode by increasing α (see arrow in Figure 1C). Note, however, that there is no single correct α value, as it depends on both the input statistics and the desired input/output relationship.\n\n3 Conditions for an adaptive additive STDP rule\n\nHere we address how the learning ratio, α, should depend on the input rate in order to produce a given neuronal input-output relationship. Using this functional form we will be able to formulate constraints for an adaptive additive STDP rule. This will guide us in the derivation of a biophysical implementation of the adaptive control scheme. The problem in its generality is to find (i) how the learning ratio should depend on the postsynaptic rate and (ii) how the postsynaptic rate depends on the input rate and the synaptic weights. 
By performing self-consistent calculations using a Fokker-Planck formulation, the problem is reduced to finding conditions for how the learning ratio should depend on the input rates only.\n\nLet α denote the depression/potentiation ratio α = LTD/LTP as before. Now we determine how the parameter β = α − 1 should scale with presynaptic rates in order to control the neuronal gain. The Fokker-Planck formulation permits an analytic calculation of the steady-state distribution of synaptic weights [3].\n\nFigure 2: Self-consistent Fokker-Planck calculations. Conditions for zero neuronal gain. A The output rate does not depend on the input rate: zero neuronal gain. B Dependence of the mean synaptic weight on input rates. C W_tot ∝ r_pre <w>, see text. D The dependence of β = α − 1 on input rate. E, F A(w) and P(w) are functions of the synaptic strength and depend on the input rate. Note that eight different input rates are used but only traces 1, 3, 5, 7 are shown for A(w) and P(w), in which the dashed line corresponds to the case with the lowest presynaptic rate.\n\n
The competition parameter for N excitatory afferents is given by W_tot = t_w r_pre N <w>, where the time window t_w enters through the probability P_d = t_w/t_isi that a synaptic event occurs within the time window (t_w < t_isi). The amounts of potentiation and depression for the additive STDP yield in the steady state, neglecting the exponential timing dependence, the following expression for the drift term A(w):\n\nA(w) = P_d A− [w/W_tot − (1 − 1/α)]  (2)\n\nA(w) represents the net weight \"force field\" experienced by an individual synapse. Thus, A(w) determines whether a given synapse (w) will increase or decrease as a function of its synaptic weight. The steepness of the A(w) function determines the degree of synaptic competition. The w/W_tot term is a competition term whereas the (1 − 1/α) term provides a destabilizing force. When w_max > (1 − 1/α) W_tot the synaptic weight distribution is bimodal. The steady-state distribution reads\n\nP(w) = K exp[(−w(1 − 1/α) + w^2/(2 W_tot))/A−]  (3)\n\nwhere K normalizes the P(w) distribution [3].\n\nNow, equations (2-3), with appropriate definitions of the terms, constitute a self-consistent system. Using these equations one can calculate how the parameter β should scale with the presynaptic input rate in order to produce a given postsynaptic firing rate. For a given presynaptic rate, equations (2-3) can be iterated until a self-consistent solution is found. At that point, the postsynaptic firing rate can be calculated. Here, instead, we impose a fixed postsynaptic output rate for a given input rate and search for a self-consistent solution using β as a free parameter. Performing this calculation for a range of input rates provides us with the desired dependency of β on the presynaptic firing rate. 
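Equations (2) and (3) are straightforward to evaluate directly. The sketch below assumes illustrative values for P_d, A−, W_tot, and the weight grid, chosen only to exhibit the sign structure of A(w) around its unstable fixed point w* = (1 − 1/α) W_tot; it is not the paper's self-consistent iteration.

```python
import numpy as np

def drift(w, w_tot, alpha, p_d=0.1, a_minus=0.005):
    """Drift term A(w) of eq. (2). Below the unstable fixed point
    w* = (1 - 1/alpha) * w_tot the drift is negative (synapse decays);
    above it the drift is positive (synapse grows toward w_max)."""
    return p_d * a_minus * (w / w_tot - (1.0 - 1.0 / alpha))

def steady_state_weights(w, w_tot, alpha, a_minus=0.005):
    """Steady-state weight distribution P(w) of eq. (3),
    normalized numerically on the grid w (plays the role of K)."""
    p = np.exp((-w * (1.0 - 1.0 / alpha) + w**2 / (2.0 * w_tot)) / a_minus)
    return p / (np.sum(p) * (w[1] - w[0]))
```

Because every synapse drifts away from w*, probability mass piles up at the two boundaries, which is the bimodal distribution described in the text whenever w_max > (1 − 1/α) W_tot.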
Once a solution is reached we also examine the resulting steady-state synaptic weight distribution P(w) and the corresponding drift term A(w) as a function of the presynaptic input rate.\n\nThe results of such a calculation are illustrated in Figure 2. The neuronal gain, the ratio between the postsynaptic firing rate and the input rate, is set to be zero (Fig. 2A). To normalize postsynaptic firing rates the average synaptic weight has to decrease in order to compensate for the increasing presynaptic firing rate, as seen in Fig. 2B. The condition for a zero neuronal gain is that the average synaptic weight should decrease as 1/r_pre. This makes W_tot constant, as shown in Fig. 2C. For these values, β has to increase with input rate as shown in Fig. 2D. Note that this curve is approximately linear. The dependence of A(w) and the synaptic weight distribution P(w) on different presynaptic rates is illustrated in Fig. 2E and F. As the presynaptic rates increase, the A(w) function is lowered (the dashed line indicates the smallest presynaptic rate), thus pushing more synapses to smaller values since they experience a net negative \"force field\". This is also reflected in the synaptic weight distribution, which is pushed to the lower boundary as the input rates increase. When enforcing a different neuronal gain, the dependence of the β term on the presynaptic rates remains approximately linear but with a different slope (not shown).\n\n4 Derivation of an adaptive learning rule with biophysical components\n\nThe key insight from the above calculations is the observed linear dependence of β on presynaptic rates. However, when implementing an adaptive rule with biophysical elements it is very likely that individual components will have a non-linear dependence on each other. The Fokker-Planck analysis suggests that the non-linearities should effectively cancel. Why should the system be linear? 
Another way to see where the linearity requirement comes from is that the (w/W_tot − β) term in the expression for A(w) (valid for small β) has to be appropriately balanced when the input rates increase. The linearity of β(r_pre) follows from W_tot being linear in r_pre.\n\nNow, how could β depend on presynaptic rates? A natural solution would be to use postsynaptic calcium to measure the postsynaptic firing and therefore indirectly the presynaptic firing rate. Moreover, the asymmetry (β) of the learning ratio could depend on the level of postsynaptic calcium. It is known that increased resting calcium levels inhibit NMDA channels and thus the calcium influx due to synaptic input. Additionally, the calcium levels required for depression are easier to reach. Both of these effects in turn increase the probability of LTD induction. Incorporating these intermediate steps gives the following scheme:\n\nr_pre → (f_1) → r_post → (p) → [Ca] → (q) → β\n\nThis scheme introduces parameters (p and q) and a function (f_1) to control the linearity/non-linearity between the variables. The global constraint from the Fokker-Planck analysis is that the effective relation between β and r_pre should be linear. A biophysical formulation of the above scheme is the following:\n\nFigure 3: Left Steady-state response with (squares) or without (circles) the adaptive tracking scheme. When the STDP rule is extended with an adaptive control loop, the output rates are normalized in the presence of correlated input. Right Fast adaptive tracking: evolution of the membrane potential (top) and the learning ratio (bottom). 
Since β tracks changes in intracellular calcium on a rapid time-scale, every spike experiences a different learning ratio, α. Note that the adaptive scheme approximates the learning ratio (α = 1.05) used in [1].\n\nτ_Ca d[Ca]/dt = −[Ca] + τ_Ca γ Σ_i δ^p(t − t_i)  (4)\n\nτ_β dβ/dt = −β + [Ca]^q  (5)\n\nThe parameter p determines how the calcium concentration scales with the postsynaptic firing rate (the delta spikes δ above), so that the steady-state calcium is [Ca] = τ_Ca γ r_post^p, and q controls the learning sensitivity. γ controls the rise of steady-state calcium with increasing postsynaptic rates (r_post). The time constants τ_Ca and τ_β determine the calcium dynamics and the time course of the adaptive rule, respectively. Note that we have not specified the neuronal transfer function, f_1.\n\nTo ensure a linear relation between β and r_pre it follows from the Fokker-Planck analysis that [f_1(r_pre)]^pq should be approximately linear in r_pre. The neuronal gain can now be independently controlled by the parameter γ. Moreover, the drift term A(w) becomes\n\nA(w) = P_d A+ [w/W_tot − [τ_Ca γ r_post^p]^q]  (6)\n\nfor β << 1. A(w) can be written in this form since we use W_d = −A− = −A+ α = −A+ (1 + [τ_Ca γ r_post^p]^q). The w/W_tot term is a competition term whereas the [τ_Ca γ r_post^p]^q term provides a destabilizing force. Note also that when w_max > [τ_Ca γ r_post^p]^q W_tot there is a bimodal synaptic weight distribution and synaptic competition is preserved. A complete stability analysis is beyond the scope of the present study.\n\nFigure 4: Full numerical simulation of the adaptive additive STDP rule. Parameters: p = q = 1, τ_Ca = 10 ms, τ_β = 100 ms. A γ = 1.25. B γ = 0.25. 
C Input correlations are C_max = 0, 0.3, 0.6.\n\n5 Numerical simulations\n\nNext, we examine whether the theory of adaptive normalization carries over to a full-scale simulation of the integrate-and-fire model with the STDP rule and the biophysical adaptive scheme as described above. First, we studied the neuronal gain (cf. Figure 1) when the inputs were strongly correlated. Driving a neuron with increasing input rates increases the output rate significantly when there is no adaptive scheme (squares, Figure 3 Left), as observed previously (cf. Figure 1B). Adding the adaptive loop normalizes the output rates (circles, Figure 3 Left). This simulation shows that the average postsynaptic firing rate is regulated by the adaptive tracking scheme. This is expected since the Fokker-Planck analysis is based on the steady-state synaptic weight distribution. To gain further insight into the operation of the adaptive loop we examined the spike-to-spike behavior of the tracking scheme. Figure 3 (Right) displays the evolution of the membrane potential (top) and the learning ratio α = 1 + β (bottom). The adaptive rule tracks fast changes in firing by adjusting the learning ratio for each spike. Thus, the strength of plasticity is different for every spike. Interestingly, the learning ratio (α) fluctuates around the value 1.05 which was used in previous studies [1]. Our fast, spike-to-spike tracking scheme is in contrast to other homeostatic mechanisms operating on the time-scale of hours to days [11, 12, 13, 14]. In our formulation, the learning ratio, via β, tracks changes in intracellular calcium, which in turn reflects the instantaneous firing rate. Slower homeostatic mechanisms are unable to detect these rapid changes in firing statistics. Because this fast adaptive scheme depends on recent neuronal firing, pairing several spikes on a time-scale comparable to the calcium dynamics introduces non-linear summation effects. 
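The adaptive control loop of equations (4-5) can be sketched as a simple Euler integration. The time constants τ_Ca = 10 ms and τ_β = 100 ms and the value γ = 0.25 follow the Figure 4 parameters with p = q = 1; the per-spike calcium increment of γ (so that the mean calcium approaches τ_Ca γ r_post), the regular 20 Hz spike train, and the function name are illustrative assumptions.

```python
import numpy as np

def adaptive_beta(spike_times, t_end=2000.0, dt=0.1,
                  tau_ca=10.0, tau_beta=100.0, gamma=0.25, q=1.0):
    """Euler integration of the adaptive loop (eqs. 4-5) with p = 1:
    each postsynaptic spike increments calcium by gamma, calcium decays
    with tau_ca, and beta = alpha - 1 relaxes toward [Ca]^q with time
    constant tau_beta. Times are in ms; returns the beta trace."""
    spikes = set(np.round(np.asarray(spike_times) / dt).astype(int))
    ca, beta = 0.0, 0.0
    trace = []
    for i in range(int(t_end / dt)):
        if i in spikes:
            ca += gamma                      # calcium influx at a spike
        ca += -ca / tau_ca * dt              # calcium decay
        beta += (-beta + ca**q) / tau_beta * dt
        trace.append(beta)
    return np.array(trace)

# Regular 20 Hz postsynaptic firing: mean calcium ~ tau_ca*gamma*r = 0.05,
# so the learning ratio alpha = 1 + beta hovers near the fixed value 1.05
# used in [1], while still fluctuating spike to spike.
beta = adaptive_beta(np.arange(0.0, 2000.0, 50.0))
```

Because τ_β is short compared to homeostatic time scales but long compared to the inter-spike interval, β both tracks rate changes quickly and retains a spike-to-spike ripple, mirroring the tracking behavior in Figure 3 (Right).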
\n\nNeurons with this adaptive STDP control loop can detect changes in the input \ncorrelation while being only weakly dependent on the presynaptic firing rate. Figure \n4a and 4b show two different regimes corresponding to two different values of the \nparameter , . In the high , regime (Fig. 4a) the neuronal gain is zero. The neuronal \ngain increased when , decreased (Fig. 4b) as expected from the theory. \nIn a \ndifferent regime where we introduce increasing correlations between the synaptic \ninputs [1] we find that the neuronal gain is changed little with increasing input \nrates but increases substantially with increasing input correlations (Fig 4c) . Thus, \nthe adaptive aSTDP rule can normalize the mean postsynaptic rate even when the \ninput statistics change. With other adaptive parameters we also found learning \nregimes where the responses to input correlations were affected differentially (not \nshown). \n\n\f6 Discussion \n\nSynaptic learning rules have to operate under widely changes conditions such as \ndifferent input statistics or neuromodulation. How can a learning rule dynami(cid:173)\ncally guide a network into functionally similar operating regime under different \nconditions? We have addressed this issue in the context of spike-timing-dependent \nplasticity (STDP) \n[1, 10J. We found that STDP is very sensitive to the ratio of \nsynaptic strengthening to weakening, (t, and requires different values for different \ninput statistics. To correct for this, we proposed an adaptive control scheme to \nadjust the plasticity rule. This adaptive mechanisms makes the learning rule more \nrobust to changing input conditions while preserving its interesting properties, such \nas synaptic competition. We suggested a biophysically plausible mechanism that \ncan implement the adaptive changes consistent with the requirements derived using \nthe Fokker-Planck analysis. \n\nOur adaptive STDP rule adjusts the learning ratio on a millisecond time-scale. 
This is in contrast to other, slow homeostatic controllers considered previously [11, 12, 13, 14, 3]. Because the learning rule changes rapidly, it is very sensitive to the input statistics. Furthermore, the synaptic weight changes add non-linearly due to the rapid self-regulation. In recent experiments similar non-linearities have been detected (Y. Dan, personal communication), which might have roles in making synaptic plasticity adaptive. Finally, the new set of adaptive parameters could be independently controlled by metaplasticity to bring the neuron into different operating regimes.\n\nAcknowledgments\n\nWe thank Larry Abbott, Mark van Rossum, and Sen Song for helpful discussions. J.T. was supported by the Wennergren Foundation, and grants from the Swedish Medical Research Foundation and The Royal Academy for Science. A.K. was supported by NIH Grants 2 R01 NS27337-12 and 5 R01 NS27337-13. Both A.K. and J.T. thank the Sloan Foundation for support.\n\nReferences\n\n[1] Song, S., Miller, K., & Abbott, L. Nature Neuroscience, 3:919-926, 2000.\n[2] Rubin, J., Lee, D., & Sompolinsky, H. Physical Review Letters, 86:364-367, 2001.\n[3] van Rossum, M., Bi, G.-Q., & Turrigiano, G. J Neurosci, 20:8812-8821, 2000.\n[4] Sejnowski, T. J Theoretical Biology, 69:385-389, 1977.\n[5] Abbott, L. & Nelson, S. Nature Neuroscience, 3:1178-1183, 2000.\n[6] Miller, K. & MacKay, D. Neural Computation, 6:100-126, 1994.\n[7] Markram, H., Lubke, J., Frotscher, M., & Sakmann, B. Science, 275:213-215, 1997.\n[8] Bell, C., Han, V., Sugawara, Y., & Grant, K. Nature, 387:278-281, 1997.\n[9] Bi, G.-Q. & Poo, M. J Neuroscience, 18:10464-10472, 1998.\n[10] Kempter, R., Gerstner, W., & van Hemmen, J. Neural Computation, 13:2709-2742, 2001.\n[11] Bell, A. In Moody, J., Hanson, S., & Lippmann, R., editors, Advances in Neural Information Processing Systems, volume 4. Morgan-Kaufmann, 1992.\n[12] LeMasson, G., Marder, E.
, & Abbott, L. Science, 259:1915-1917, 1993.\n[13] Turrigiano, G., Leslie, K., Desai, N., Rutherford, L., & Nelson, S. Nature, 391:892-896, 1998.\n[14] Turrigiano, G. & Nelson, S. Curr Opin Neurobiol, 10:358-364, 2000.\n", "award": [], "sourceid": 1957, "authors": [{"given_name": "Jesper", "family_name": "Tegn\u00e9r", "institution": null}, {"given_name": "\u00c1d\u00e1m", "family_name": "Kepecs", "institution": null}]}