{"title": "Firing rate predictions in optimal balanced networks", "book": "Advances in Neural Information Processing Systems", "page_first": 1538, "page_last": 1546, "abstract": "How are firing rates in a spiking network related to neural input, connectivity and network function? This is an important problem because firing rates are one of the most important measures of network activity, in both the study of neural computation and neural network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. We can calculate firing rates by treating balanced network dynamics as an algorithm for optimizing signal representation. We identify this algorithm and then calculate firing rates by finding the solution to the algorithm. Our firing rate calculation relates network firing rates directly to network input, connectivity and function. This allows us to explain the function and underlying mechanism of tuning curves in a variety of systems.", "full_text": "Firing rate predictions in optimal balanced networks\n\nDavid G.T. Barrett\n\nGroup for Neural Theory\n\u00b4Ecole Normale Sup\u00b4erieure\n\nParis, France\n\ndavid.barrett@ens.fr\n\nSophie Den`eve\n\nGroup for Neural Theory\n\u00b4Ecole Normale Sup\u00b4erieure\n\nParis, France\n\nsophie.deneve@ens.fr\n\nChristian K. Machens\n\nChampalimaud Neuroscience Programme\nChampalimaud Centre for the Unknown\n\nchristian.machens@neuro.fchampalimaud.org\n\nLisbon, Portugal\n\nAbstract\n\nHow are \ufb01ring rates in a spiking network related to neural input, connectivity and\nnetwork function? 
This is an important problem because firing rates are a key measure of network activity, in both the study of neural computation and neural network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. We can calculate firing rates by treating balanced network dynamics as an algorithm for optimising signal representation. We identify this algorithm and then calculate firing rates by finding the solution to the algorithm. Our firing rate calculation relates network firing rates directly to network input, connectivity and function. This allows us to explain the function and underlying mechanism of tuning curves in a variety of systems.\n\n1 Introduction\n\nThe firing rate of a neuron is arguably the most important characterisation of both neural network dynamics and neural computation, and has been ever since the seminal recordings of Adrian and Zotterman [1] in which the firing rate of a neuron was observed to increase with muscle tension. A large, sometimes bewildering, diversity of firing rate responses to stimuli have since been observed [2], ranging from sigmoidal-shaped tuning curves [3, 4], to bump-shaped tuning curves [5], with much diversity in between [6]. What is the computational role of these firing rate responses and how are firing rates determined by neuron dynamics, network connectivity and neural input?\nThere have been many attempts to answer these questions, using a variety of experimental and theoretical techniques. 
However, most approaches have struggled to deal with the non-linearity of neural spike-generation mechanisms and the strong interaction between neurons as mediated through network connectivity. Significant progress has been made using linear approximations. For example, experimentally recorded firing rates in a variety of systems have been described using the linear receptive field, which captures the linear relationship between stimulus and firing rate response [7]. However, in recent years, it has been found that this linear approximation often fails to capture important aspects of neural activity [8]. Similarly, in theoretical studies, linear approximations have been used to simplify non-linear firing rate calculations in a variety of network models, using Taylor Series approximations [9], and more recently, using linear response theory [10, 11]. These calculations have led to important insights into how neural network connectivity and input determine firing rates. Again, however, these calculations only apply to a restricted subset of situations, where the linearising assumptions apply.\nWe develop a new technique for calculating firing rates, by directly identifying the non-linear structure of tightly balanced networks. Balanced network theory has come to be regarded as the standard model of cortical activity [12, 13], accounting for a large proportion of observed activity through a dynamic balance of excitation and inhibition [14]. Recently, it was found that tightly balanced networks are synonymous with efficient coding, in which a signal is represented optimally subject to metabolic costs [15]. This observation allows us, here, to interpret balanced network activity as an optimisation algorithm. We can then directly identify that the non-linear relationship between firing rates, input, connectivity and neural computation is provided by this algorithm. 
We use this technique to calculate firing rates in a variety of balanced network models, thereby exploring the computational role and underlying network mechanisms of monotonic firing rate tuning curves, bump-shaped tuning curves and tuning curve inhomogeneity.\n\n2 Optimal balanced network models\n\nWe calculate firing rates in a balanced network consisting of N recurrently connected leaky integrate-and-fire neurons (Fig. 1a). The network is driven by an input signal I = (I1, . . . , Ik, . . . , IM), where Ik is the kth input and M is the dimension of the input. In response to this input, neurons produce spike trains, denoted by s = (s1, . . . , si, . . . , sN), where si(t) = \u03a3k \u03b4(t \u2212 t^i_k) is the spike train of neuron i with spike times {t^i_k}. A spike is produced whenever the membrane potential Vi exceeds the spiking threshold Ti of neuron i. This simple spike rule captures the essence of a neural spike-generation mechanism. The membrane potential has the following dynamics:\n\ndVi/dt = \u2212\u03bbVi + \u03a3k \u2126ik sk + \u03a3j Fij Ij ,   (1)\n\nwhere the sums run over k = 1, . . . , N and j = 1, . . . , M, \u03bb is the neuron leak, \u2126ik is the connection strength from neuron k to neuron i and Fij is the connection strength from input j to neuron i [16]. When a neuron spikes, the membrane potential is reset to Ri \u2261 Ti + \u2126ii. This is written in equation 1 as a self-connection. Throughout this work, we focus on networks where connectivity \u2126 is symmetric - this simplifies our analysis, although in certain cases we can generalise to non-symmetric matrices.\nWe are interested in networks where a balance of excitation and inhibition coincides with optimal signal representation. Not all choices of network connectivity and spiking thresholds will give both [12, 13], but if certain conditions are satisfied, this can be possible. 
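As a concrete illustration, the dynamics of equation 1 can be simulated directly. The sketch below uses forward-Euler integration with a small hand-picked network; the weights F, leak, time step and input are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Toy simulation of equation 1. A neuron spikes when Vi exceeds Ti = -Omega_ii/2,
# and the self-connection Omega_ii in the recurrent kick implements the reset
# to Ri = Ti + Omega_ii. All parameter values here are illustrative assumptions.
lam = 10.0                            # leak (1/s)
dt = 1e-3                             # Euler step (s)
F = np.array([[ 0.4,  0.1],           # feedforward weights (rows = neurons)
              [-0.4,  0.1],
              [ 0.1,  0.4],
              [ 0.1, -0.4]])
N = F.shape[0]
Omega = -F @ F.T - 0.001 * np.eye(N)  # recurrent weights (see section 3)
T = -np.diag(Omega) / 2               # spiking thresholds

def simulate(I, steps=5000):
    """Integrate dVi/dt = -lam*Vi + sum_j Fij*Ij, plus recurrent spike kicks."""
    V = np.zeros(N)
    counts = np.zeros(N)
    for _ in range(steps):
        V += dt * (-lam * V + F @ I)  # leak and feedforward drive
        spikes = (V > T).astype(float)
        counts += spikes
        V += Omega @ spikes           # recurrence, including the self-reset
    return counts

counts = simulate(np.array([3.0, 3.0]))
```

Neurons whose feedforward weights align with the input are driven above threshold and fire repeatedly, while the recurrent weights feed each spike back into every membrane potential.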
Before we proceed to our firing rate calculation, we must derive these conditions.\nWe begin by calculating the sum total of excitatory and inhibitory input received by neurons in our network. This is given by solving equation 1 implicitly:\n\nVi = \u03a3k \u2126ik rk + \u03a3j Fij xj ,   (2)\n\nwhere rk is a temporal filtering of the kth neuron\u2019s spike train\n\nrk = \u222b_0^\u221e e^{\u2212\u03bbt\u2032} sk(t \u2212 t\u2032) dt\u2032 ,   (3)\n\nand xj is a temporal filtering of the jth input\n\nxj = \u222b_0^\u221e e^{\u2212\u03bbt\u2032} Ij(t \u2212 t\u2032) dt\u2032 .   (4)\n\nAll the excitatory and inhibitory inputs received by neuron i are included in this summation (Eqn. 2). This can be rewritten as the slope of a loss function as follows:\n\nVi = \u2212(1/2) dE(r)/dri ,   (5)\n\nwhere\n\nE(r) = \u2212rT \u2126r \u2212 2rT Fx + c   (6)\n\nand c is a constant.\nNow, we can use this expression to derive the conditions that connectivity must satisfy so that the network operates in an optimal balanced state. In balanced networks, excitation and inhibition cancel to produce an input that is the same order of magnitude as the spiking threshold. This is very small, relative to the magnitude of excitation or inhibition alone [12, 13]. In tightly balanced networks, which we consider, this cancellation is so precise that Vi \u2192 0 in the large network limit (for all active neurons) [15, 17, 18]. Now, using equation 5, we can see that this tight balance condition is equivalent to saying that our loss function (Eqn. 6) is minimised.\nThis has two implications for our choice of network connectivity and spiking thresholds. First, the loss function must have a minimum. 
To guarantee this, we require \u2212\u2126 to be positive definite. Secondly, the spiking threshold of each neuron must be chosen so that each spike acts to minimise the cost function. This spiking condition can be written as E(no spike) > E(with spike). Using equation 6, this can be rewritten as E(no spike) > E(no spike) \u2212 2[\u2126r]k \u2212 2[Fx]k \u2212 \u2126kk. Finally, cancelling terms, and using equation 2, we can write our spiking condition as Vk > \u2212\u2126kk/2. Therefore, the spiking threshold for each neuron must be set to Tk \u2261 \u2212\u2126kk/2, though this condition can be relaxed considerably if our loss function has an additional linear cost term1. Once these conditions are satisfied, our network is tightly balanced.\n\nFigure 1: Optimal balanced network example. (A) Schematic of a balanced neural network providing an optimal spike-based representation \u02c6x of a signal x. (B) A tightly balanced network can produce an output \u02c6x1 (blue, top panel) that closely matches the signal x1 (black, top panel). Population spiking activity is represented here using a raster plot (middle panel), where each spike is represented with a dot. For a randomly chosen neuron (red, middle panel), we plot the total excitatory input (green, bottom panel) and the total inhibitory input (red, bottom panel). The sum of excitation and inhibition (black, bottom panel) fluctuates about the spiking threshold (thin black line, bottom panel) indicating that this network is tightly balanced. A spike is produced whenever this sum exceeds the spiking threshold. (C) Firing rate tuning curves are measured during simulations of our balanced network. Each line represents the tuning curve of a single neuron. The representation error at each value of x1 is given by equation 7.\n\nWe are interested in networks that are both tightly balanced and optimal. 
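The two identities used in this derivation can be checked numerically. The sketch below, with arbitrary randomly drawn weights, verifies equation 5 by finite differences and verifies that a spike lowers E exactly when Vk exceeds \u2212\u2126kk/2:

```python
import numpy as np

# Check 1: with symmetric Omega, E(r) = -r'Omega r - 2 r'F x + c gives
# dE/dr_i = -2[Omega r + F x]_i, i.e. V_i = -(1/2) dE/dr_i (equation 5).
# Check 2: a spike of neuron k (r_k -> r_k + 1) changes E by -2*V_k - Omega_kk,
# so it lowers E exactly when V_k > -Omega_kk/2. All values are arbitrary.
rng = np.random.default_rng(1)
N, M = 5, 2
F = rng.standard_normal((N, M))
A = rng.standard_normal((N, N))
Omega = -(A @ A.T) - np.eye(N)              # symmetric, -Omega positive definite
r, x = rng.random(N), rng.random(M)

def E(r):
    return -r @ Omega @ r - 2 * r @ F @ x   # constant c omitted

V = Omega @ r + F @ x                       # equation 2

# finite-difference gradient confirms equation 5
eps = 1e-6
for i in range(N):
    e_i = np.eye(N)[i]
    grad_i = (E(r + eps * e_i) - E(r - eps * e_i)) / (2 * eps)
    assert np.isclose(V[i], -grad_i / 2, atol=1e-5)

# a unit spike changes the loss by exactly -2*V_k - Omega_kk
for k in range(N):
    dE = E(r + np.eye(N)[k]) - E(r)
    assert np.isclose(dE, -2 * V[k] - Omega[k, k])
    assert (dE < 0) == (V[k] > -Omega[k, k] / 2)
```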
Now, we can see from equation 5 that the balance of excitation and inhibition coincides with the optimisation of our loss function (Eqn. 6). This is an important result, because it relates balanced network dynamics to a neural computation. Specifically, it allows us to interpret the spiking activity of our tightly balanced network as an algorithm that optimises a loss function (Eqn. 6).\nThis is interesting because this optimisation can be easily mapped onto many useful computations. A particularly interesting example is given by \u2126 = \u2212FFT \u2212 \u03b2I, where I is the identity matrix [15, 17, 18]. In recent work, it was shown that this connectivity can be learnt using a spike timing-dependent plasticity rule [15]. Here, we use this connectivity to rewrite our loss function (Eqn. 6) as follows:\n\nE = (x \u2212 \u02c6x)^2 + \u03b2 \u03a3i ri^2 ,   (7)\n\nwhere\n\n\u02c6x = FT r .   (8)\n\nThe second term of equation 7 is a metabolic cost term that penalises neurons for spiking excessively, and the first term quantifies the difference between the signal value x and a linear read-out, \u02c6x, where \u02c6x is computed using the linear decoder FT (Eqn. 8). Therefore, a network with this connectivity produces spike trains that optimise equation 7, thereby producing an output \u02c6x that is close to the signal value x. Throughout the remainder of this work, we will focus on optimal balanced networks with this form of connectivity.\nWe illustrate the properties of this system by simulating a network of 30 neurons. We find that our network produces spike trains (Fig. 1 b, middle panel) that represent x with great accuracy, across a broad range of signal values (Fig. 1 b, top panel). As expected, this optimal performance coincides with a tight balance of excitation and inhibition (Fig. 1 b, bottom panel), reminiscent of cortical observations [14]. 
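The equivalence between the generic loss (Eqn. 6) and this representation loss (Eqn. 7) can be confirmed numerically, with the constant c = xTx; the dimensions and values in this sketch are arbitrary:

```python
import numpy as np

# With Omega = -F F' - beta*I, expanding equation 6 with c = x'x gives
#   -r'Omega r - 2 r'F x + x'x = (x - F'r)'(x - F'r) + beta * r'r,
# which is exactly equation 7 with the read-out xhat = F'r of equation 8.
rng = np.random.default_rng(2)
N, M, beta = 6, 2, 0.1
F = rng.standard_normal((N, M))
Omega = -F @ F.T - beta * np.eye(N)
r, x = rng.random(N), rng.random(M)

E6 = -r @ Omega @ r - 2 * r @ F @ x + x @ x    # equation 6, with c = x'x
xhat = F.T @ r                                 # linear read-out, equation 8
E7 = (x - xhat) @ (x - xhat) + beta * (r @ r)  # equation 7
assert np.isclose(E6, E7)
```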
In this example, our network has been optimised to represent a 2-dimensional signal x = (x1, x2). We measure firing rate tuning curves using a fixed value of x2 while varying x1. We use this signal because it can produce interesting, non-linear tuning curves (Fig. 1 c), especially at signal values where neurons fall silent. In the next section, we will attempt to understand this tuning curve non-linearity by calculating firing rates analytically.\n\n3 Firing rate analysis with quadratic programming\n\nOur goal is to calculate the firing rates f of all the neurons in these tightly balanced network models as a function of the network input, the recurrent network connectivity \u2126, and the feedforward connectivity F. On the surface, this may seem to be a difficult problem, because individual neurons have complicated non-linear integrate-and-fire dynamics and they interact strongly through network connectivity. However, the loss function relationship that we developed above allows us now to circumvent these problems.\nThere are many possible firing rate measures used in experiments and theoretical studies. Usually, a box-shaped temporal averaging window is used. We define the firing rate of a neuron to be:\n\nfk = \u03bb \u222b_0^\u221e e^{\u2212\u03bbt\u2032} sk(t \u2212 t\u2032) dt\u2032 .   (9)\n\nThis is an exponentially weighted temporal average2, with timescale \u03bb^\u22121. We have chosen this temporal average because it matches the dynamics of synaptic filters in our neural network (Eqn. 3), allowing us to write fi(t) = \u03bbri(t). Here, we need to multiply by \u03bb to ensure that our firing rates are reported in units of spikes per second.\n\n1 Suppose that our network optimises the following cost function: E(r) = \u2212rT \u2126r \u2212 2rT Fx + c + bT r, where b is a vector of positive linear weights. Then, we find that the optimal spiking thresholds for this network are given by Ti \u2261 (\u2212\u2126ii + bi)/2 \u2265 \u2212\u2126ii/2. Therefore, we can apply our techniques to all networks with thresholds Ti \u2265 \u2212\u2126ii/2.\n\n2 In this case, the firing rate timescale is very short, because \u03bb is the membrane potential leak. However, we can easily generalise our framework so that this timescale can be as long as the slowest synaptic process [17, 18].\n\nWe can now calculate firing rates using this relationship and by exploiting the algorithmic nature of tightly balanced networks. These networks produce spike trains that minimise our loss function E(r) (Eqn. 6). Therefore, the firing rates of our network are those that minimise E(f/\u03bb), under the constraint that firing rates must be positive:\n\n{fi} = arg min_{fi \u2265 0} E(f/\u03bb) .   (10)\n\nThis firing rate prediction is the solution to a constrained optimisation problem known as quadratic programming [19]. The optimisation is quadratic, because our loss function is a quadratic function of f, and it is constrained because firing rates are positive valued quantities, by definition.\nWe illustrate this firing rate prediction using a simple two-neuron network, with recurrent connectivity given by \u2126 = \u2212FFT \u2212 \u03b2I as before. We simulate this system and measure the spike-train firing rates for both neurons (Fig. 2 a, left panel). We then use equation 10 to obtain a theoretical prediction for firing rates. We find that our firing rate prediction matches the spike-train measurement with great accuracy (Fig. 2 a, middle panel and right panel).\nWe can now use our firing rate solution to understand the relationship between firing rates, input, connectivity and function. 
When both neurons are active, we can solve equation 10 exactly, to see that firing rates are related to network connectivity according to f = \u2212\u03bb\u2126^\u22121Fx. When one of the neurons becomes silent, the other neuron must compensate by adjusting its firing rate slope. For example, when neuron 1 becomes silent, we have f1 = 0 and the firing rate of neuron 2 increases to f2 = \u03bbF2x/(F2F2^T + \u03b2), where F2 denotes the second row of F. Similarly, when neuron 2 becomes silent, we have f2 = 0, and the firing rate of neuron 1 increases to f1 = \u03bbF1x/(F1F1^T + \u03b2), where F1 is the first row of F. This non-linear change in firing rates is caused by the positivity constraint.\n\nFigure 2: Calculating firing rates in a two-neuron example. (A) Tuning curve measurements are obtained from a simulation of a two-neuron network (left, top). The representation error E for this network is given at each signal value x (left, bottom). Tuning curve predictions are obtained using quadratic programming (middle, top), with predicted representation error E (middle, bottom). Predicted firing rates closely match measured firing rates for both neurons, and for all signal values (right). (B) A phase diagram of the network activity during a simulation (left panel). Firing rates evolve from a silent state towards the minimum of the cost function E(x1 = 0) (red cross, left panel). Here, they fluctuate about the minimum, increasing in discrete steps of size \u03bb and decreasing exponentially (left panel, inset). We also measure the firing rate trajectory (right panel) as the network evolves towards the minimum of the cost function E(x1 = 1) (blue cross, right panel), where neuron 2 is silent.\n\n
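These predictions can be sketched in code. With Omega = \u2212FFT \u2212 \u03b2I, the quadratic programme of equation 10 is a non-negative least-squares problem, solved below with SciPy; the two-neuron weights and signal are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import nnls

# Quadratic-programming firing-rate prediction (equation 10). With
# Omega = -F F' - beta*I, minimising E(f/lam) over f >= 0 is equivalent to
# non-negative least squares on ||x - F'r||^2 + beta*||r||^2, with f = lam*r.
# The two-neuron weights and the signal below are illustrative assumptions.
lam, beta = 20.0, 0.05
F = np.array([[ 1.0, 0.3],     # row i = decoding weights of neuron i
              [-0.2, 1.1]])
Omega = -F @ F.T - beta * np.eye(2)

def predict_rates(x):
    # stack the metabolic cost as extra least-squares rows: sqrt(beta)*r ~ 0
    A = np.vstack([F.T, np.sqrt(beta) * np.eye(2)])
    b = np.concatenate([x, np.zeros(2)])
    r, _ = nnls(A, b)
    return lam * r

x = np.array([0.5, 0.5])
f = predict_rates(x)

# here both neurons are active, so the positivity constraint is slack and
# the prediction matches the closed form f = -lam * Omega^{-1} F x
f_closed = -lam * np.linalg.solve(Omega, F @ x)
assert np.all(f > 0)
assert np.allclose(f, f_closed, atol=1e-6)
```

For signal values where the unconstrained optimum would drive one rate negative, nnls instead clamps that rate to zero, reproducing the slope change described above.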
It can be understood functionally, as an attempt by the network to represent x accurately,\nwithin the constraints of the system.\nIn larger networks, our \ufb01ring rate prediction is more dif\ufb01cult to write down analytically because there\nare so many interactions between individual neurons and the positivity constraint. Nonetheless, we\ncan make a number of general observations about tuning curve shape. In general, we can interpret\ntuning curve shape to be the solution of a quadratic programming problem, which can be written as\na piece-wise linear function f = M (x) \u00b7 x, where M(x) is a matrix whose entries depend on the\nregion of signal space occupied by x. For example, in the two-neuron system that we just discussed,\nthe signal space is partitioned into three regions: one region where neuron 1 is active and where\nneuron 2 is silent, a second region where both neurons are active and a third region where neuron\n1 is silent and neuron 2 is active (Fig. 2 a, left panel). In each region there is a different linear\nrelationship between the signal and the \ufb01ring rates. The boundaries of these regions occur at points\nin signal space where an active neuron becomes silent (or where a silent neuron becomes active). At\nmost, there will be N + 1 such regions.\nWe can also use quadratic programming to describe the spiking dynamics underlying these non-\nlinear networks. Returning to our two-neuron example, we measure the temporal evolution of the\n\ufb01ring rates f1 and f2. We \ufb01nd that if we initialise the network to a sub-optimal state, the \ufb01ring rates\nrapidly evolve toward the optimum in a series of discrete steps of size \u03bb (Fig. 2 b, left panel). The\nstep-size is \u03bb because when neuron i spikes, ri \u2192 ri + 1, according to equation 3, and therefore,\nfi \u2192 fi+\u03bb, according to equation 9. Once the network has reached the optimal state, it is impossible\nfor it to remain there. 
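A toy version of this spike-by-spike descent can be written down directly: let the filtered rates decay, and fire the spike with the largest margin whenever it lowers E. All parameter values below are illustrative assumptions:

```python
import numpy as np

# Greedy sketch of the spiking algorithm: r decays exponentially between
# spikes, and neuron k fires (r_k -> r_k + 1) only when V_k > -Omega_kk/2,
# i.e. only when the spike lowers the loss E. The rates then hover about the
# quadratic-programming optimum. Weights, leak and signal are assumptions.
lam, beta, dt = 20.0, 0.05, 1e-3
F = np.array([[ 1.0, 0.3],
              [-0.2, 1.1]])
Omega = -F @ F.T - beta * np.eye(2)
x = np.array([5.0, 5.0])

r = np.zeros(2)
for _ in range(20000):
    r *= 1.0 - lam * dt                         # exponential decay of the filter
    V = Omega @ r + F @ x                       # equation 2
    k = int(np.argmax(V + np.diag(Omega) / 2))  # neuron with the largest margin
    if V[k] > -Omega[k, k] / 2:                 # spike only if it lowers E
        r[k] += 1.0

f = lam * r
f_opt = -lam * np.linalg.solve(Omega, F @ x)    # interior optimum of equation 10
assert np.all(f > 0)                            # both neurons active here
assert np.all(np.abs(f - f_opt) < lam)          # within one spike-sized step
```

After the initial transient, the rates fluctuate within one spike-sized step (lam) of the quadratic-programming optimum, mirroring the discrete steps and exponential decay described here.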
The firing rates begin to decay exponentially, because our firing rate definition is an exponentially weighted summation (Eqn. 9) (Fig. 2 b, middle panel). Eventually, when the firing rate has decayed too far from the optimal solution, another spike is fired and the network moves closer to the optimum. In this way, spiking dynamics can be interpreted as a quadratic programming algorithm. The firing rate continues to fluctuate around the optimal spiking value. These fluctuations are noisy, in that they are dependent on initial conditions of the network. However, this noise has an unusual algorithmic structure that is not well characterised by standard probabilistic descriptions of spiking irregularity.\n\n4 Analysing tuning curve shape with quadratic programming\n\nNow that we have a framework for relating firing rates to network connectivity and input, we can explore the computational function of tuning curve shapes and the network mechanisms that generate these tuning curves. We will investigate systems that have monotonic tuning curves and systems that have bump-shaped tuning curves, which together constitute a large proportion of firing rate observations [2, 3, 4, 5].\nWe begin by considering a system of monotonic tuning curves, similar to the examples that we have considered already where recurrent connectivity is given by \u2126 = \u2212FFT \u2212 \u03b2I. In these systems, the recurrent connectivity and hence the tuning curve shape is largely determined by the form of the feedforward matrix F. This matrix also determines the contribution of tuning curves to computational function, through its role as a linear decoder for signal representation (Eqn. 8). We illustrate this by simulating the response of our network to a 2-dimensional signal x = (x1, x2), where x1 is varied and x2 is fixed, using three different configurations of F (Fig. 3). 
This system produces monotonically increasing and decreasing tuning curves (Fig. 3 a). We find that neurons with positive values of F have positive firing rate slopes (Fig. 3, blue tuning curves), and neurons with negative F values have negative firing rate slopes (Fig. 3, red tuning curves). If the values of F are regularly spaced, then the tuning curves of individual neurons are regularly spaced, and, if we manipulate this regularity by adding some random noise to the connectivity, we obtain inhomogeneous and highly irregular tuning curves (Fig. 3 b). This inhomogeneity has little effect on the representation error.\nThis inhomogeneous monotonic tuning is reminiscent of tuning in many neural systems, including the oculomotor system [4]. The oculomotor system represents eye position, using neurons with negative slopes to represent left side eye positions and neurons with positive slopes to represent right side eye positions. To relate our model to this system, the signal variable x1 can be interpreted as eye-position, with zero representing the central eye position, and with positive and negative values of x1 representing right and left side eye positions, respectively. Now, we can use the relationship that we have developed between tuning curves and computational function to interpret oculomotor tuning as an attempt to represent eye positions optimally.\n\nFigure 3: The relationship between firing rates, stimulus and connectivity in a network of 16 neurons. (A) Each dot represents the contribution of a neuron to a signal representation (when the firing rate is 10 \u00d7 16 Hz) (1st column). Here, we consider signals along a straight line (thin black line). We simulate a network of neurons and measure firing rates (2nd column). These measurements closely match our algorithmically predicted firing rates (3rd column), where each point in the 4th column represents the firing rate of an individual neuron for a given stimulus. (B) Similar to \u2019(A)\u2019 except that some noise is added to the connectivity. The representation error (bottom panels, column 2 and column 3) is similar to the network without connectivity noise. (C) Similar to \u2019(B)\u2019, except that we consider signals along a circle (thin black line). Each dot represents the contribution of a neuron to a signal representation (when the firing rate is 20 \u00d7 16 Hz) (1st column). This signal produces bump-shaped tuning curves (2nd column), which we can also predict accurately (3rd and 4th column).\n\nFigure 4: Performance of quadratic programming in firing rate prediction. (A) The mean prediction error (absolute difference between each prediction and measurement, averaged over neurons and over 0.5 seconds) increases with \u03bb (bottom line). The standard deviation of the prediction becomes much larger with \u03bb (top line). (B) The mean prediction error (bottom line) and standard deviation of the prediction error (top line) also increase with noise. However, the prediction error remains less than 1 Hz.\n\nBump-shaped tuning curves can be produced by networks representing circular variables x1 = cos \u03b8, x2 = sin \u03b8, where \u03b8 is the orientation of the signal (Fig. 3 c). As before, the tuning curves of individual neurons are regularly spaced if the values of F are regularly spaced. If we add some noise to the connectivity F, the tuning curves become inhomogeneous and highly irregular. 
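This bump-tuning example can be reproduced by solving equation 10 across the circle. In the sketch below, the decoding weights are regularly spaced preferred directions; the network size, decoder scale and metabolic cost are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import nnls

# Bump-shaped tuning curves from quadratic programming: neurons decode a
# circular signal x = (cos(theta), sin(theta)) with regularly spaced decoding
# weights, and the predicted rates solve equation 10 at each angle.
N, lam, beta = 16, 20.0, 0.01
phis = np.linspace(-np.pi, np.pi, N, endpoint=False)
F = 0.5 * np.column_stack([np.cos(phis), np.sin(phis)])  # preferred directions

def predict_rates(x):
    # non-negative least squares on ||x - F'r||^2 + beta*||r||^2, f = lam*r
    A = np.vstack([F.T, np.sqrt(beta) * np.eye(N)])
    b = np.concatenate([x, np.zeros(N)])
    r, _ = nnls(A, b)
    return lam * r

thetas = np.linspace(-np.pi, np.pi, 64)
rates = np.array([predict_rates(np.array([np.cos(t), np.sin(t)]))
                  for t in thetas])                      # shape (64, N)

# each neuron's tuning curve peaks near its preferred direction
peak_thetas = thetas[np.argmax(rates, axis=0)]
err = np.abs(np.angle(np.exp(1j * (peak_thetas - phis))))
assert np.all(rates >= 0)
assert np.all(err < 0.5)
```

Adding small random noise to F here produces the inhomogeneous bumps discussed in the text, while leaving the representation error largely unchanged.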
Again,\nthis inhomogeneity has little effect on the signal representation error.\nIn all the above examples, our \ufb01ring rate predictions closely match \ufb01ring rate measurements from\nnetwork simulations (Fig. 3). The success of our algorithmic approach in calculating \ufb01ring rates\ndepends on the success of spiking networks in algorithmically optimising a cost function. The\nresolution of this spiking algorithm is determined by the leak \u03bb and membrane potential noise. If\n\u03bb is large, the \ufb01ring rate prediction error will have large \ufb02uctuations about the optimal \ufb01ring rate\nvalue (Fig. 4 a). However, the average prediction error (averaged over time and neurons) remains\nsmall. Similarly, membrane potential noise3 increases \ufb02uctuations about the optimal \ufb01ring rate but\nthe average prediction error remains small (until the noise is large enough to generate spikes without\nany input) (Fig. 4 b).\n\n5 Discussion and Conclusions\n\nWe have developed a new algorithmic technique for calculating \ufb01ring rates in tightly balanced net-\nworks. Our approach does not require us to make any linearising approximations. Rather, we di-\nrectly identify the non-linear relationship between \ufb01ring rates, connectivity, input and optimal signal\nrepresentation. Identifying such relationships is a long-standing problem in systems neuroscience,\nlargely because the mathematical language that we use to describe information representation is\nvery different to the language that we use to describe neural network spiking statistics. For tightly\nbalanced networks, we have essentially solved this problem, by matching the \ufb01ring rate statistics of\nneural activity to the structure of neural signal representation. 
The non-linear relationship that we identify is the solution to a quadratic programming problem.\nPrevious studies have also interpreted firing rates to be the result of a constrained optimisation problem [21], but for a population coding model, not for a network of spiking neurons. In a more recent study, a spiking network was used to solve an optimisation problem, although this network required positive and negative spikes, which is difficult to reconcile with biological spiking [22].\nThe firing rate tuning curves that we calculate have allowed us to investigate poorly understood features of experimentally recorded tuning curves. In particular, we have been able to evaluate the impact of tuning curve inhomogeneity on neural computation. This inhomogeneity often goes unreported in experimental studies because it is difficult to interpret [6], and in theoretical studies, it is often treated as a form of noise that must be averaged out. We find that tuning curve inhomogeneity is not necessarily noise because it does not necessarily harm signal representation. Therefore, we propose that tuning curves are inhomogeneous simply because they can be.\nBeyond the interpretation of tuning curve shape, our quadratic programming approach to firing rate calculations promises to be useful in other areas of neuroscience - from data analysis, where it may be possible to train our framework using neural data so as to predict firing rate responses to sensory stimuli - to the study of computational neurodegeneration, where the impact of neural damage on tuning curves and computation may be characterised.\n\nAcknowledgements\n\nWe would like to thank Nuno Calaim for helpful comments on the manuscript. 
Also, we are grateful for generous funding from the Emmy-Noether grant of the Deutsche Forschungsgemeinschaft (CKM) and the Chaire d\u2019excellence of the Agence Nationale de la Recherche (CKM, DB), as well as a James McDonnell Foundation Award (SD) and EU grants BACS FP6-IST-027140, BIND MECT-CT-20095-024831, and ERC FP7-PREDSPIKE (SD).\n\n3 Membrane potential noise can be included in our network model by adding a Wiener process noise term to our membrane potential equation (Eqn. 1). We parametrise this noise with a constant \u03b7.\n\nReferences\n\n[1] Adrian E.D. and Zotterman Y. (1926) The impulses produced by sensory nerve endings. The Journal of Physiology 49(61): 156-193\n\n[2] Wohrer A., Humphries M.D. and Machens C.K. (2012) Population-wide distributions of neural activity during perceptual decision-making. Progress in Neurobiology 103: 156-193\n\n[3] Sclar G. and Freeman R.D. (1982) Orientation selectivity in the cat\u2019s striate cortex is invariant with stimulus contrast. Experimental Brain Research 46(3): 457-461\n\n[4] Aksay E., Olasagasti I., Mensh B.D., Baker R., Goldman M.S. and Tank D.W. (2007) Functional dissection of circuitry in a neural integrator. Nature Neuroscience 10(4): 494-504\n\n[5] Hubel D.H. and Wiesel T.N. (1962) Receptive fields, binocular interaction and functional architecture in the cat\u2019s visual cortex. The Journal of Physiology 160(1): 106-154\n\n[6] Olshausen B.A. and Field D.J. (2005) How close are we to understanding V1? Neural Computation 17(8): 1665-1699\n\n[7] Aertsen A., Johannesma P.I.M. and Hermes D.J. (1980) Spectro-temporal receptive fields of auditory neurons in the grassfrog. Biological Cybernetics\n\n[8] Machens C.K., Wehr M.S. and Zador A.M. (2004) Linearity of cortical receptive fields measured with natural sounds. The Journal of Neuroscience 24(5): 1089-1100\n\n[9] Ginzburg I. and Sompolinsky H. 
(1994) Theory of correlations in stochastic neural networks. Physical Review E 50(4): 3171-3191\n\n[10] Trousdale J., Hu Y., Shea-Brown E. and Josi\u0107 K. (2012) Impact of network structure and cellular response on spike time correlations. PLoS Computational Biology 8(3): e1002408\n\n[11] Beck J., Bejjanki V.R. and Pouget A. (2011) Insights from a simple expression for linear Fisher information in a recurrently connected population of spiking neurons. Neural Computation 23(6): 1484-1502\n\n[12] van Vreeswijk C. and Sompolinsky H. (1996) Chaos in neuronal networks with balanced excitatory and inhibitory activity. Science 274(5293): 1724-1726\n\n[13] van Vreeswijk C. and Sompolinsky H. (1998) Chaotic balanced state in a model of cortical circuits. Neural Computation 10(6): 1321-1371\n\n[14] Haider B., Duque A., Hasenstaub A.R. and McCormick D.A. (2006) Neocortical network activity in vivo is generated through a dynamic balance of excitation and inhibition. The Journal of Neuroscience 26(17): 4535-4545\n\n[15] Bourdoukan R., Barrett D.G.T., Machens C. and Deneve S. (2012) Learning optimal spike-based representations. Advances in Neural Information Processing Systems 25: 2294-2302\n\n[16] Knight B.W. (1972) Dynamics of encoding in a population of neurons. The Journal of General Physiology 59(6): 734-766\n\n[17] Boerlin M., Machens C.K. and Deneve S. (2012) Predictive coding of dynamical variables in balanced spiking networks. PLoS Computational Biology, in press\n\n[18] Boerlin M. and Deneve S. (2011) Spike-based population coding and working memory. PLoS Computational Biology 7: e1001080\n\n[19] Boyd S. and Vandenberghe L. (2004) Convex Optimization. Cambridge University Press\n\n[20] Braitenberg V. and Sch\u00fcz A. (1991) Anatomy of the Cortex: Statistics and Geometry. Springer\n\n[21] Salinas E. 
(2006) How behavioral constraints may determine optimal sensory representations. PLoS Biology 4(12): e387\n\n[22] Rozell C.J., Johnson D.H., Baraniuk R.G. and Olshausen B.A. (2008) Sparse coding via thresholding and local competition in neural circuits. Neural Computation 20(10): 2526-2563", "award": [], "sourceid": 767, "authors": [{"given_name": "David", "family_name": "Barrett", "institution": "\u00c9cole Normale Sup\u00e9rieure"}, {"given_name": "Sophie", "family_name": "Den\u00e8ve", "institution": "\u00c9cole Normale Sup\u00e9rieure"}, {"given_name": "Christian", "family_name": "Machens", "institution": "Champalimaud Centre for the Unknown"}]}