{"title": "Identifying Dendritic Processing", "book": "Advances in Neural Information Processing Systems", "page_first": 1261, "page_last": 1269, "abstract": "In system identification both the input and the output of a system are available to an observer and an algorithm is sought to identify parameters of a hypothesized model of that system. Here we present a novel formal methodology for identifying dendritic processing in a neural circuit consisting of a linear dendritic processing filter in cascade with a spiking neuron model. The input to the circuit is an analog signal that belongs to the space of bandlimited functions. The output is a time sequence associated with the spike train. We derive an algorithm for identification of the dendritic processing filter and reconstruct its kernel with arbitrary precision.", "full_text": "Identifying Dendritic Processing\n\nDepartment of Electrical Engineering\n\nDepartment of Electrical Engineering\n\nYevgeniy B. Slutskiy\u2217\n\nColumbia University\nNew York, NY 10027\n\nys2146@columbia.edu\n\nAurel A. Lazar\n\nColumbia University\nNew York, NY 10027\n\naurel@ee.columbia.edu\n\nAbstract\n\nIn system identi\ufb01cation both the input and the output of a system are available to\nan observer and an algorithm is sought to identify parameters of a hypothesized\nmodel of that system. Here we present a novel formal methodology for identifying\ndendritic processing in a neural circuit consisting of a linear dendritic processing\n\ufb01lter in cascade with a spiking neuron model. The input to the circuit is an analog\nsignal that belongs to the space of bandlimited functions. The output is a time\nsequence associated with the spike train. 
We derive an algorithm for identification of the dendritic processing filter and reconstruct its kernel with arbitrary precision.

1 Introduction

The nature of encoding and processing of sensory information in the visual, auditory and olfactory systems has been extensively investigated in the systems neuroscience literature. Many phenomenological [1, 2, 3] as well as mechanistic [4, 5, 6] models have been proposed to characterize and clarify the representation of sensory information on the level of single neurons.
Here we investigate a class of phenomenological neural circuit models in which the time-domain linear processing takes place in the dendritic tree and the resulting aggregate dendritic current is encoded in the spike domain by a spiking neuron. In block diagram form, these neural circuit models are of the [Filter]-[Spiking Neuron] type and as such represent a fundamental departure from the standard Linear-Nonlinear-Poisson (LNP) model that has been used to characterize neurons in many sensory systems, including vision [3, 7, 8], audition [2, 9] and olfaction [1, 10]. While the LNP model also includes a linear processing stage, it describes spike generation using an inhomogeneous Poisson process. In contrast, the [Filter]-[Spiking Neuron] model incorporates the temporal dynamics of spike generation and allows one to consider more biologically-plausible spike generators.
We perform identification of dendritic processing in the [Filter]-[Spiking Neuron] model assuming that input signals belong to the space of bandlimited functions, a class of functions that closely model natural stimuli in sensory systems. Under this assumption, we show that the identification of dendritic processing in the above neural circuit becomes mathematically tractable.
Using simulated data, we demonstrate that under certain conditions it is possible to identify the impulse response of the dendritic processing filter with arbitrary precision. Furthermore, we show that the identification results fundamentally depend on the bandwidth of test stimuli.
The paper is organized as follows. The phenomenological neural circuit model and the identification problem are formally stated in section 2. The Neural Identification Machine and its realization as an algorithm for identifying dendritic processing is extensively discussed in section 3. Performance of the identification algorithm is exemplified in section 4. Finally, section 5 concludes our work.

∗The names of the authors are alphabetically ordered.

2 Problem Statement

In what follows we assume that the dendritic processing is linear [11] and any nonlinear effects arise as a result of the spike generation mechanism [12]. We use linear BIBO-stable filters (not necessarily causal) to describe the computation performed by the dendritic tree. Furthermore, a spiking neuron model (as opposed to a rate model) is used to model the generation of action potentials or spikes.
We investigate a general neural circuit comprised of a filter in cascade with a spiking neuron model (Fig. 1(a)). This circuit is an instance of a Time Encoding Machine (TEM), a nonlinear asynchronous circuit that encodes analog signals in the time domain [13, 14]. Examples of spiking neuron models considered in this paper include the ideal IAF neuron, the leaky IAF neuron and the threshold-and-feedback (TAF) neuron [15]. However, the methodology developed below can be extended to many other spiking neuron models as well.
We break down the full identification of this circuit into two problems: (i) identification of linear operations in the dendritic tree and (ii) identification of spike generator parameters.
First, we consider problem (i) and assume that parameters of the spike generator can be obtained through biophysical experiments. Then we show how to address (ii) by exploring the space of input signals. We consider a specific example of a neural circuit in Fig. 1(a) and carry out a full identification of that circuit.

Figure 1: Problem setup. (a) The dendritic processing is described by a linear filter and spikes are produced by a (nonlinear) spiking neuron model. (b) An example of a neural circuit in (a) is a linear filter in cascade with the ideal IAF neuron. An input signal u is first passed through a filter with an impulse response h. The output of the filter v(t) = (u ∗ h)(t), t ∈ R, is then encoded into a time sequence (t_k)_{k∈Z} by the ideal IAF neuron.

3 Neuron Identification Machines

A Neuron Identification Machine (NIM) is the realization of an algorithm for the identification of the dendritic processing filter in cascade with a spiking neuron model. First, we introduce several definitions needed to formally address the problem of identifying dendritic processing. We then consider the [Filter]-[Ideal IAF] neural circuit. We derive an algorithm for a perfect identification of the impulse response of the filter and provide conditions for the identification with arbitrary precision. Finally, we extend our results to the [Filter]-[Leaky IAF] and [Filter]-[TAF] neural circuits.

3.1 Preliminaries

We model signals u = u(t), t ∈ R, at the input to a neural circuit as elements of the Paley-Wiener space Ξ = {u ∈ L2(R) | supp(Fu) ⊆ [−Ω, Ω]}, i.e., as functions of finite energy having a finite spectral support (F denotes the Fourier transform).
Furthermore, we assume that the dendritic processing filters h = h(t), t ∈ R, are linear, BIBO-stable and have a finite temporal support, i.e., they belong to the space H = {h ∈ L1(R) | supp(h) ⊆ [T1, T2]}.
Definition 1. A signal u ∈ Ξ at the input to a neural circuit together with the resulting output T = (t_k)_{k∈Z} of that circuit is called an input/output (I/O) pair and is denoted by (u, T).
Definition 2. Two neural circuits are said to be Ξ-I/O-equivalent if their respective I/O pairs are identical for all u ∈ Ξ.
Definition 3. Let P : H → Ξ with (Ph)(t) = (h ∗ g)(t), where (h ∗ g) denotes the convolution of h with the sinc kernel g ≜ sin(Ωt)/(πt), t ∈ R. We say that Ph is the projection of h onto Ξ.
Definition 4. Signals {u_i}_{i=1}^N are said to be linearly independent if there do not exist real numbers {α_i}_{i=1}^N, not all zero, and real numbers {β_i}_{i=1}^N such that ∑_{i=1}^N α_i u_i(t + β_i) = 0.

3.2 NIM for the [Filter]-[Ideal IAF] Neural Circuit

An example of a model circuit in Fig. 1(a) is the [Filter]-[Ideal IAF] circuit shown in Fig. 1(b). In this circuit, an input signal u ∈ Ξ is passed through a filter with an impulse response (kernel) h ∈ H and then encoded by an ideal IAF neuron with a bias b ∈ R+, a capacitance C ∈ R+ and a threshold δ ∈ R+. The output of the circuit is a sequence of spike times (t_k)_{k∈Z} that is available to an observer.
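The encoding performed by the ideal IAF neuron of Fig. 1(b) can be sketched numerically as follows. This is a minimal forward-Euler discretization, not part of the paper's own pipeline; the default values of b, C, δ and the integration step are illustrative assumptions.

```python
import numpy as np

def iaf_encode(v, dt, b=1.0, C=1.0, delta=0.01):
    """Ideal IAF encoder: integrate (b + v(t))/C and emit a spike,
    resetting the membrane voltage to 0, whenever the threshold delta
    is reached. `v` is the filter output (u * h) sampled at step `dt`.
    """
    y = 0.0
    spike_times = []
    for i, vi in enumerate(np.asarray(v)):
        y += dt * (b + vi) / C          # forward-Euler integration step
        if y >= delta:
            spike_times.append((i + 1) * dt)
            y = 0.0                     # voltage reset to 0
    return np.array(spike_times)
```

For v ≡ 0 the neuron fires regularly at the density D = b/(Cδ), the quantity that appears in the Nyquist-type rate condition below.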
This neural circuit is an instance of a TEM and its operation can be described by a set of equations (formally known as the t-transform [13]):

∫_{t_k}^{t_{k+1}} (u ∗ h)(s) ds = q_k,  k ∈ Z,  (1)

where q_k ≜ Cδ − b(t_{k+1} − t_k). Intuitively, at every spike time t_{k+1} the ideal IAF neuron is providing a measurement q_k of the signal v(t) = (u ∗ h)(t) on the interval t ∈ [t_k, t_{k+1}].
Proposition 1. The left-hand side of the t-transform in (1) can be written as a bounded linear functional L_k : Ξ → R with L_k(Ph) = ⟨φ_k, Ph⟩, where φ_k(t) = (1_{[t_k, t_{k+1}]} ∗ ũ)(t) and ũ = u(−t), t ∈ R, denotes the involution of u.
Proof: Since (u ∗ h) ∈ Ξ, we have (u ∗ h)(t) = (u ∗ h ∗ g)(t), t ∈ R, and therefore ∫_{t_k}^{t_{k+1}} (u ∗ h)(s) ds = ∫_{t_k}^{t_{k+1}} (u ∗ Ph)(s) ds. Now since Ph is bounded, the expression on the right-hand side of the equality is a bounded linear functional L_k : Ξ → R with

L_k(Ph) = ∫_{t_k}^{t_{k+1}} (u ∗ Ph)(s) ds = ⟨φ_k, Ph⟩,  (2)

where φ_k ∈ Ξ and the last equality follows from the Riesz representation theorem [16]. To find φ_k, we use the fact that Ξ is a Reproducing Kernel Hilbert Space (RKHS) [17] with a kernel K(s, t) = g(t − s). By the reproducing property of the kernel [17], we have φ_k(t) = ⟨φ_k, K_t⟩ = L_k(K_t). Letting ũ = u(−t) denote the involution of u and using (2), we obtain

φ_k(t) = ⟨1_{[t_k, t_{k+1}]} ∗ ũ, K_t⟩ = (1_{[t_k, t_{k+1}]} ∗ ũ)(t). □

Proposition 1 effectively states that the measurements (q_k)_{k∈Z} of v(t) = (u ∗ h)(t) can also be interpreted as the measurements of (Ph)(t). A natural question then is how to identify Ph from (q_k)_{k∈Z}.
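In code, the measurements q_k of Eq. (1) are obtained directly from consecutive spike times; a short sketch, assuming b, C and δ are known (e.g., from biophysical experiments):

```python
import numpy as np

def t_transform_measurements(spike_times, b=1.0, C=1.0, delta=0.01):
    """q_k = C*delta - b*(t_{k+1} - t_k): each inter-spike interval
    yields one measurement of the filter output v = u * h (Eq. (1))."""
    tk = np.asarray(spike_times)
    return C * delta - b * np.diff(tk)
```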
To that end, we note that an observer can typically record both the input u = u(t), t ∈ R, and the output T = (t_k)_{k∈Z} of a neural circuit. Since (q_k)_{k∈Z} can be evaluated from (t_k)_{k∈Z} using the definition of q_k in (1), the problem is reduced to identifying Ph from an I/O pair (u, T).
Theorem 1. Let u be bounded with supp(Fu) = [−Ω, Ω], h ∈ H and b/(Cδ) > Ω/π. Then given an I/O pair (u, T) of the [Filter]-[Ideal IAF] neural circuit, Ph can be perfectly identified as

(Ph)(t) = ∑_{k∈Z} c_k ψ_k(t),

where ψ_k(t) = g(t − t_k), t ∈ R. Furthermore, c = G⁺q with G⁺ denoting the Moore-Penrose pseudoinverse of G, [G]_{lk} = ∫_{t_l}^{t_{l+1}} u(s − t_k) ds for all k, l ∈ Z, and [q]_l = Cδ − b(t_{l+1} − t_l).
Proof: By appropriately bounding the input signal u, the spike density (the average number of spikes over arbitrarily long time intervals) of an ideal IAF neuron is given by D = b/(Cδ) [14]. Therefore, for D > Ω/π the set of the representation functions (ψ_k)_{k∈Z}, ψ_k(t) = g(t − t_k), is a frame in Ξ [18] and (Ph)(t) = ∑_{k∈Z} c_k ψ_k(t). To find the coefficients c_k we note from (2) that

q_l = ⟨φ_l, Ph⟩ = ∑_{k∈Z} c_k ⟨φ_l, ψ_k⟩ = ∑_{k∈Z} [G]_{lk} c_k,  (3)

where [G]_{lk} = ⟨φ_l, ψ_k⟩ = ⟨1_{[t_l, t_{l+1}]} ∗ ũ, g(· − t_k)⟩ = ∫_{t_l}^{t_{l+1}} u(s − t_k) ds. Writing (3) in matrix form, we obtain q = Gc with [q]_l = q_l and [c]_k = c_k. Finally, the coefficients c_k, k ∈ Z, can be computed as c = G⁺q. □

Remark 1. The condition b/(Cδ) > Ω/π in Theorem 1 is a Nyquist-type rate condition. Thus, perfect identification of the projection of h onto Ξ can be achieved for a finite average spike rate.
Remark 2.
Ideally, we would like to identify the kernel h ∈ H of the filter in cascade with the ideal IAF neuron. Note that unlike h, the projection Ph belongs to the space L2(R), i.e., in general Ph is not BIBO-stable and does not have a finite temporal support. Nevertheless, it is easy to show that (Ph)(t) approximates h(t) arbitrarily closely on t ∈ [T1, T2], provided that the bandwidth Ω of u is sufficiently large.
Remark 3. If the impulse response h(t) = δ(t), i.e., if there is no processing on the (arbitrary) input signal u(t), then q_l = ∫_{t_l}^{t_{l+1}} u(s) ds, l ∈ Z. Furthermore,

∫_{t_l}^{t_{l+1}} (u ∗ Ph)(s) ds = ∫_{t_l}^{t_{l+1}} (u ∗ h)(s) ds = ∫_{t_l}^{t_{l+1}} u(s) ds = ∫_{t_l}^{t_{l+1}} (u ∗ g)(s) ds,  l ∈ Z.

The above holds if and only if (Ph)(t) = g(t), t ∈ R. In other words, if h(t) = δ(t), then we identify Pδ(t) = sin(Ωt)/(πt), the projection of δ(t) onto Ξ.
Corollary 1. Let u be bounded with supp(Fu) = [−Ω, Ω], h ∈ H and b/(Cδ) > Ω/π. Furthermore, let W = (τ1, τ2) so that (τ2 − τ1) > (T2 − T1) and let τ = (τ1 + τ2)/2, T = (T1 + T2)/2. Then given an I/O pair (u, T) of the [Filter]-[Ideal IAF] neural circuit, (Ph)(t) can be approximated arbitrarily closely on t ∈ [T1, T2] by

ĥ(t) = ∑_{k: t_k∈W} c_k ψ_k(t),

where ψ_k(t) = g(t − (t_k − τ + T)), c = G⁺q, [G]_{lk} = ∫_{t_l}^{t_{l+1}} u(s − (t_k − τ + T)) ds and [q]_l = Cδ − b(t_{l+1} − t_l) for all k, l ∈ Z, provided that |τ1| and |τ2| are sufficiently large.
Proof: Through a change of coordinates t → t′ = (t − τ + T) illustrated in Fig. 2, we obtain W′ = [τ1 − τ + T, τ2 − τ + T] ⊃ [T1, T2] and the set of spike times (t_k − τ + T)_{k: t_k∈W}. Note that W′ → R as (τ2 − τ1) → ∞. The rest of the proof follows from Theorem 1 and the fact that lim_{t→±∞} g(t) = 0. □
From Corollary 1 we see that if the [Filter]-[Ideal IAF] neural circuit is producing spikes with a spike density above the Nyquist rate, then we can use a set of spike times (t_k)_{k: t_k∈W} from a single temporal window W to identify (Ph)(t) to an arbitrary precision on [T1, T2].
This result is not surprising. Since the spike density is above the Nyquist rate, we could have also used a canonical time decoding machine (TDM) [13] to first perfectly recover the filter output v(t) and then employ one of the widely available LTI system techniques to estimate (Ph)(t). However, the problem becomes much more difficult if the spike density is below the Nyquist rate.

Figure 2: Change of coordinates in Corollary 1. (a) Top: example of a causal impulse response h(t) with supp(h) = [T1, T2], T1 = 0. Middle: projection Ph of h onto some Ξ. Note that Ph is not causal and supp(Ph) = R. Bottom: h(t) and (Ph)(t) are plotted on the same set of axes. (b) Top: an input signal u(t) with supp(Fu) = [−Ω, Ω]. Middle: only red spikes from a temporal window W = (τ1, τ2) are used to construct ĥ(t). Bottom: Ph is approximated by ĥ(t) on t ∈ [T1, T2] using spike times (t_k − τ + T)_{k: t_k∈W}.

Theorem 2.
(The Neuron Identification Machine) Let {u_i | supp(Fu_i) = [−Ω, Ω]}_{i=1}^N be a collection of N linearly independent and bounded stimuli at the input to a [Filter]-[Ideal IAF] neural circuit with a dendritic processing filter h ∈ H. Furthermore, let T_i = (t_k^i)_{k∈Z} denote the output of the neural circuit in response to the bounded input signal u_i. If ∑_{j=1}^N b/(Cδ) > Ω/π, then (Ph)(t) can be identified perfectly from the collection of I/O pairs {(u_i, T_i)}_{i=1}^N.
Proof: Consider the SIMO TEM [14] depicted in Fig. 3(a). h(t) is the input to a population of N [Filter]-[Ideal IAF] neural circuits. The spikes (t_k^i)_{k∈Z} at the output of each neural circuit represent distinct measurements q_k^i = ⟨φ_k^i, Ph⟩ of (Ph)(t). Thus we can think of the q_k^i's as projections of Ph onto (φ_1^1, φ_2^1, …, φ_k^1, …, φ_1^N, φ_2^N, …, φ_k^N, …). Since the filters are linearly independent [14], it follows that, if {u_i}_{i=1}^N are appropriately bounded and ∑_{j=1}^N b/(Cδ) > Ω/π, or equivalently if the number of neurons N > ΩCδ/(πb) = Ω/(πD), the set of functions {(ψ_k^j)_{k∈Z}}_{j=1}^N, with ψ_k^j(t) = g(t − t_k^j), is a frame for Ξ [14], [18]. Hence

(Ph)(t) = ∑_{j=1}^N ∑_{k∈Z} c_k^j ψ_k^j(t).  (4)

To find the coefficients c_k^j, we take the inner product of (4) with φ_l^1(t), φ_l^2(t), …, φ_l^N(t):

⟨φ_l^i, Ph⟩ = ∑_{k∈Z} c_k^1 ⟨φ_l^i, ψ_k^1⟩ + ∑_{k∈Z} c_k^2 ⟨φ_l^i, ψ_k^2⟩ + ··· + ∑_{k∈Z} c_k^N ⟨φ_l^i, ψ_k^N⟩,

for i = 1, …, N, l ∈ Z. Letting [G^{ij}]_{lk} = ⟨φ_l^i, ψ_k^j⟩ and noting that ⟨φ_l^i, Ph⟩ ≡ q_l^i, we obtain

q_l^i = ∑_{k∈Z} [G^{i1}]_{lk} c_k^1 + ∑_{k∈Z} [G^{i2}]_{lk} c_k^2 + ··· + ∑_{k∈Z} [G^{iN}]_{lk} c_k^N,  (5)

for i = 1, …, N, l ∈ Z. Writing (5) in matrix form, we have q = Gc, where q = [q^1, q^2, …, q^N]^T with [q^i]_l = Cδ − b(t_{l+1}^i − t_l^i), [G^{ij}]_{lk} = ∫_{t_l^i}^{t_{l+1}^i} u_i(s − t_k^j) ds and c = [c^1, c^2, …, c^N]^T. Finally, to find the coefficients c_k^j, we compute c = G⁺q. □
Corollary 2. Let {u_i}_{i=1}^N be as before, h ∈ H and ∑_{j=1}^N b/(Cδ) > Ω/π. Furthermore, let W = (τ1, τ2) so that (τ2 − τ1) > (T2 − T1) and let τ = (τ1 + τ2)/2, T = (T1 + T2)/2. Then given the I/O pairs {(u_i, T_i)}_{i=1}^N of the [Filter]-[Ideal IAF] neural circuit, (Ph)(t) can be approximated arbitrarily closely on t ∈ [T1, T2] by ĥ(t) = ∑_{j=1}^N ∑_{k: t_k^j∈W} c_k^j ψ_k^j(t), where ψ_k^j(t) = g(t − (t_k^j − τ + T)), c = G⁺q, with [G^{ij}]_{lk} = ∫_{t_l^i}^{t_{l+1}^i} u_i(s − (t_k^j − τ + T)) ds, q = [q^1, q^2, …, q^N]^T, [q^i]_l = Cδ − b(t_{l+1}^i − t_l^i) for all k, l ∈ Z, provided that |τ1| and |τ2| are sufficiently large.
Proof: Similar to Corollary 1. □
Corollary 3. Let supp(Fu) = [−Ω, Ω], h ∈ H and let {W^i ≜ (τ_1^i, τ_2^i)}_{i=1}^N be a collection of windows of fixed length with (τ_2^i − τ_1^i) > (T2 − T1), i = 1, 2, …, N. Furthermore, let τ^i = (τ_1^i + τ_2^i)/2, T = (T1 + T2)/2 and let (t_k^i)_{k∈Z} denote those spikes of the I/O pair (u, T) that belong to W^i. Then Ph can be approximated arbitrarily closely on [T1, T2] by

ĥ(t) = ∑_{j=1}^N ∑_{k: t_k∈W^j} c_k^j ψ_k^j(t),

where ψ_k^j(t) = g(t − (t_k^j − τ^j + T)), c = G⁺q with [G^{ij}]_{lk} = ∫_{t_l^i}^{t_{l+1}^i} u(s − (t_k^j − τ^j + T)) ds, q = [q^1, q^2, …, q^N]^T, [q^i]_l = Cδ − b(t_{l+1}^i − t_l^i) for all k, l ∈ Z, provided that the number of non-overlapping windows N is sufficiently large.
Proof: The input signal u restricted, respectively, to the collection of intervals {W^i ≜ (τ_1^i, τ_2^i)}_{i=1}^N plays the same role here as the test stimuli {u_i}_{i=1}^N in Corollary 2. See also Remark 9 in [14]. □

Figure 3: The Neuron Identification Machine. (a) SIMO TEM interpretation of the identification problem with (t_k^i) = (t_k)_{k: t_k∈W^i}, i = 1, 2, …, N. (b) Block diagram of the algorithm in Theorem 2.

Remark 4. The methodology presented in Theorem 2 can easily be applied to other spiking neuron models.
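Before turning to other spike generators, the single-stimulus identification algorithm (Theorem 1, realized in the block diagram of Fig. 3(b)) can be sketched numerically. This is an illustrative sketch only: the midpoint quadrature, the pseudoinverse-based solve and all default parameter values are our assumptions, not prescriptions from the paper.

```python
import numpy as np

def identify_Ph(u, spike_times, Omega, b=1.0, C=1.0, delta=0.01, n_quad=200):
    """Recover (Ph)(t) = sum_k c_k g(t - t_k) with c = G^+ q, where
    [G]_{lk} = int_{t_l}^{t_{l+1}} u(s - t_k) ds and
    [q]_l = C*delta - b*(t_{l+1} - t_l). `u` is a callable stimulus."""
    tk = np.asarray(spike_times)
    n = len(tk)
    q = C * delta - b * np.diff(tk)                  # measurements [q]_l
    G = np.empty((n - 1, n))
    for l in range(n - 1):
        ds = (tk[l + 1] - tk[l]) / n_quad
        s = tk[l] + (np.arange(n_quad) + 0.5) * ds   # midpoint-rule nodes
        for k in range(n):
            G[l, k] = np.sum(u(s - tk[k])) * ds      # [G]_{lk}
    c = np.linalg.pinv(G) @ q                        # Moore-Penrose solve

    def Ph(t):
        # g(t) = sin(Omega*t)/(pi*t) = (Omega/pi) * sinc(Omega*t/pi)
        t = np.atleast_1d(t)[:, None]
        return (Omega / np.pi) * np.sinc(Omega * (t - tk) / np.pi) @ c

    return Ph
```

In practice one would restrict the sum to spikes from a temporal window W and recenter them as in Corollaries 1-3; the sketch above keeps all recorded spikes for brevity.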
For example, for the leaky IAF neuron, we have

[q^i]_l = Cδ − bRC[1 − exp((t_l^i − t_{l+1}^i)/(RC))],  [G^{ij}]_{lk} = ∫_{t_l^i}^{t_{l+1}^i} u_i(s − t_k^j) exp((s − t_{l+1}^i)/(RC)) ds.

Similarly, for a threshold-and-feedback (TAF) neuron [15] with a bias b ∈ R+, a threshold δ ∈ R+, and a causal feedback filter with an impulse response f(t), t ∈ R, we obtain

[q^i]_l = δ − b + ∑_k … 2π·100 rad/s, which is roughly the effective bandwidth of h.

5 Conclusion

Previous work in system identification of neural circuits (see [20] and references therein) calls for parameter identification using white noise input stimuli. The identification process for, e.g., the LNP model entails identification of the linear filter, followed by a 'best-of-fit' procedure to find the nonlinearity. The performance of such an identification method has not been analytically characterized.
In our work, we presented the methodology for identifying dendritic processing in simple [Filter]-[Spiking Neuron] models from a single input stimulus. The discussed spiking neurons include the ideal IAF neuron, the leaky IAF neuron and the threshold-and-feedback (TAF) neuron. However, the methods presented in this paper are applicable to many other spiking neuron models as well.
The algorithm of the Neuron Identification Machine is based on the natural assumption that the dendritic processing filter has a finite temporal support. Therefore, its action on the input stimulus can be observed in non-overlapping temporal windows. The filter is recovered with arbitrary precision from an input/output pair of a neural circuit, where the input is a single signal assumed to be bandlimited. Remarkably, the algorithm converges for a very small number of spikes.
This should be contrasted with the reverse correlation and spike-triggered average methods [20].
Finally, the work presented here will be extended to spiking neurons with random parameters.

Acknowledgement

The work presented here was supported by NIH under the grant number R01DC008701-01.

[Figure: (a) MSE(ĥ, Ph) vs. the number of temporal windows N, in dB. (b) MSE(ĥ, h) vs. the input signal bandwidth Ω/(2π), in Hz, for spike densities D = 20, 40, 60 Hz.]

References

[1] Maria N. Geffen, Bede M. Broome, Gilles Laurent, and Markus Meister. Neural encoding of rapidly fluctuating odors. Neuron, 61(4):570-586, 2009.
[2] Sean J. Slee, Matthew H. Higgs, Adrienne L. Fairhall, and William J. Spain. Two-dimensional time coding in the auditory brainstem. The Journal of Neuroscience, 25(43):9978-9988, October 2005.
[3] Nicole C. Rust, Odelia Schwartz, J. Anthony Movshon, and Eero P. Simoncelli. Spatiotemporal elements of macaque V1 receptive fields. Neuron, 46:945-956, 2005.
[4] Daniel P. Dougherty, Geraldine A. Wright, and Alice C. Yew. Computational model of the cAMP-mediated sensory response and calcium-dependent adaptation in vertebrate olfactory receptor neurons. Proceedings of the National Academy of Sciences, 102(30):10415-10420, 2005.
[5] Yuqiao Gu, Philippe Lucas, and Jean-Pierre Rospars. Computational model of the insect pheromone transduction cascade. PLoS Computational Biology, 5(3), 2009.
[6] Zhuoyi Song, Daniel Coca, Stephen Billings, Marten Postma, Roger C. Hardie, and Mikko Juusola. Biophysical Modeling of a Drosophila Photoreceptor.
In Lecture Notes in Computer Science, volume 5863 of Proceedings of the 16th International Conference on Neural Information Processing: Part I, pages 57-71. Springer-Verlag, 2009.
[7] E. J. Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12:199-213, 2001.
[8] Jonathan W. Pillow and Eero P. Simoncelli. Dimensionality reduction in neural models: An information-theoretic generalization of spike-triggered average and covariance analysis. Journal of Vision, 6:414-428, 2006.
[9] J. J. Eggermont, A. M. H. J. Aertsen, and P. I. M. Johannesma. Quantitative characterization procedure for auditory neurons based on the spectro-temporal receptive field. Hearing Research, 10, 1983.
[10] Anmo J. Kim, Aurel A. Lazar, and Yevgeniy B. Slutskiy. System identification of Drosophila olfactory sensory neurons. Journal of Computational Neuroscience, 2010.
[11] Sydney Cash and Rafael Yuste. Linear summation of excitatory inputs by CA1 pyramidal neurons. Neuron, 22:383-394, 1999.
[12] Jonathan Pillow. Neural coding and the statistical modeling of neuronal responses. PhD thesis, New York University, May 2005.
[13] Aurel A. Lazar and Laszlo T. Tóth. Perfect recovery and sensitivity analysis of time encoded bandlimited signals. IEEE Transactions on Circuits and Systems-I: Regular Papers, 51(10):2060-2073, October 2004.
[14] Aurel A. Lazar and Eftychios A. Pnevmatikakis. Faithful representation of stimuli with a population of integrate-and-fire neurons. Neural Computation, 20(11):2715-2744, November 2008.
[15] Justin Keat, Pamela Reinagel, R. Clay Reid, and Markus Meister. Predicting every spike: A model for the responses of visual neurons. Neuron, 30:803-817, June 2001.
[16] Michael Reed and Barry Simon. Methods of Modern Mathematical Physics, Vol. 1, Functional Analysis. Academic Press, 1980.
[17] Alain Berlinet and Christine Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Kluwer Academic Publishers, 2004.
[18] Ole Christensen. An Introduction to Frames and Riesz Bases. Applied and Numerical Harmonic Analysis. Birkhäuser, 2003.
[19] Edward H. Adelson and James R. Bergen. Spatiotemporal energy models for the perception of motion. Journal of the Optical Society of America A, 2(2), February 1985.
[20] Michael C.-K. Wu, Stephen V. David, and Jack L. Gallant. Complete functional characterization of sensory neurons by system identification. Annual Review of Neuroscience, 29:477-505, 2006.
", "award": [], "sourceid": 495, "authors": [{"given_name": "Aurel", "family_name": "Lazar", "institution": null}, {"given_name": "Yevgeniy", "family_name": "Slutskiy", "institution": null}]}