{"title": "Real-Time Decoding of an Integrate and Fire Encoder", "book": "Advances in Neural Information Processing Systems", "page_first": 2906, "page_last": 2914, "abstract": "Neuronal encoding models range from the detailed biophysically-based Hodgkin Huxley model, to the statistical linear time invariant model specifying firing rates in terms of the extrinsic signal. Decoding the former becomes intractable, while the latter does not adequately capture the nonlinearities present in the neuronal encoding system. For use in practical applications, we wish to record the output of neurons, namely spikes, and decode this signal fast in order to drive a machine, for example a prosthetic device. Here, we introduce a causal, real-time decoder of the biophysically-based Integrate and Fire encoding neuron model. We show that the upper bound of the real-time reconstruction error decreases polynomially in time, and that the L2 norm of the error is bounded by a constant that depends on the density of the spikes, as well as the bandwidth and the decay of the input signal. We numerically validate the effect of these parameters on the reconstruction error.", "full_text": "Real-Time Decoding of an Integrate and Fire Encoder\n\nShreya Saxena and Munther Dahleh\n\nDepartment of Electrical Engineering and Computer Sciences\n\nMassachusetts Institute of Technology\n\nCambridge, MA 02139\n\n{ssaxena,dahleh}@mit.edu\n\nAbstract\n\nNeuronal encoding models range from the detailed biophysically-based Hodgkin\nHuxley model, to the statistical linear time invariant model specifying \ufb01ring rates\nin terms of the extrinsic signal. Decoding the former becomes intractable, while\nthe latter does not adequately capture the nonlinearities present in the neuronal\nencoding system. For use in practical applications, we wish to record the output\nof neurons, namely spikes, and decode this signal fast in order to act on this signal,\nfor example to drive a prosthetic device. 
Here, we introduce a causal, real-time decoder of the biophysically-based Integrate and Fire encoding neuron model. We show that the upper bound of the real-time reconstruction error decreases polynomially in time, and that the L2 norm of the error is bounded by a constant that depends on the density of the spikes, as well as the bandwidth and the decay of the input signal. We numerically validate the effect of these parameters on the reconstruction error.\n\n1 Introduction\n\nOne of the most detailed and widely accepted models of the neuron is the Hodgkin Huxley (HH) model [1]. It is a complex nonlinear model comprising four differential equations governing the membrane potential dynamics as well as the dynamics of the sodium, potassium and calcium currents found in a neuron. We assume in the practical setting that we are recording multiple neurons using an extracellular electrode, and thus that the observable postprocessed outputs of each neuron are the time points at which the membrane voltage crosses a threshold, also known as spikes. Even with complete knowledge of the HH model parameters, it is intractable to decode the extrinsic signal applied to the neuron given only the spike times. Model reduction techniques are accurate in certain regimes [2]; theoretical studies have also guaranteed an input-output equivalence between a multiplicative or additive extrinsic signal applied to the HH model, and the same signal applied to an Integrate and Fire (IAF) neuron model with variable thresholds [3].\n\nSpecifically, take the example of a decoder in a brain machine interface (BMI) device, where the decoded signal drives a prosthetic limb in order to produce movement. 
Given the complications involved in decoding an extrinsic signal using a realistic neuron model, current practices include decoding using a Kalman filter, which assumes a linear time invariant (LTI) encoding with the extrinsic signal as an input and the firing rate of the neuron as the output [4–6]. Although extremely tractable for decoding, this approach ignores the nonlinear processing of the extrinsic current by the neuron. Moreover, assuming firing rates as the output of the neuron averages out the data and incurs inherent delays in the decoding process. Decoding of spike trains has also been performed using stochastic jump models such as point process models [7, 8], and we are currently exploring relationships between these and our work.\n\nFigure 1: IAF Encoder and a Real-Time Decoder. (Block diagram: the input f(t) enters the IAF Encoder, which emits the spikes {t_i}_{i : |t_i| ≤ t}; the Real-Time Decoder receives these spikes and outputs the estimate f̃_t(t).)\n\nWe consider a biophysically inspired IAF neuron model with variable thresholds as the encoding model. It has been shown that, given the parameters of the model and given the spikes for all time, a bandlimited signal driving the IAF model can be perfectly reconstructed if the spikes are 'dense enough' [9–11]. This is a Nyquist-type reconstruction formula. However, for this theory to be applicable to a real-time setting, as in the case of BMI, we need a causal real-time decoder that estimates the signal at every time t, and an estimate of the time taken for the convergence of the reconstructed signal to the real signal. There have also been some approaches for causal reconstruction of a signal encoded by an IAF encoder, such as in [12]. 
However, these do not show the convergence of the estimate to the real signal as time progresses.\n\nIn this paper, we introduce a causal real-time decoder (Figure 1) that, given the parameters of the IAF encoding process, provides an estimate of the signal at every time, without the need to wait for a minimum amount of time to start decoding. We show that, under certain conditions on the input signal, the upper bound of the error between the estimated signal and the input signal decreases polynomially in time, leading to perfect reconstruction as t → ∞, or a bounded error if a finite number of iterations is used. The bounded input bounded output (BIBO) stability of a decoder is extremely important to analyze for the application of a BMI. Here, we show that the L2 norm of the error is bounded, with an upper bound that depends on the bandwidth of the signal, the density of the spikes, and the decay of the input signal.\n\nWe numerically show the utility of the theory developed here. We first provide example reconstructions using the real-time decoder and compare our results with reconstructions obtained using existing methods. We then show the dependence of the decoding error on the properties of the input signal.\n\nThe theory and algorithm presented in this paper can be applied to any system that uses an IAF encoding device, for example in pluviometry. We introduce some preliminary definitions in Section 2, and then present our theoretical results in Section 3. We use a model IAF system to numerically simulate the output of an IAF encoder and provide causal real-time reconstruction in Section 4, and end with conclusions in Section 5.\n\n2 Preliminaries\n\nWe first define the subsets of the L2 space that we consider. L^Ω_2 and L^Ω_{2,β} are defined as follows.\n\nL^Ω_2 = { f ∈ L_2 | f̂(ω) = 0 ∀ ω ∉ [−Ω, Ω] }   (1)\n\nL^Ω_{2,β} = { f : f·g ∈ L_2, f̂(ω) = 0 ∀ ω ∉ [−Ω, Ω] }   (2)\n\nwhere g(t) = (1 + |t|)^β and f̂(ω) = (Ff)(ω) 
is the Fourier transform of f. We will only consider signals in L^Ω_{2,β}, for β ≥ 0.\n\nNext, we define sinc_Ω(t) and 1_{[a,b]}(t), both of which will play an integral part in the reconstruction of signals.\n\nsinc_Ω(t) = sin(Ωt)/(Ωt) for t ≠ 0, and 1 for t = 0   (3)\n\n1_{[a,b]}(t) = 1 for t ∈ [a, b], and 0 otherwise   (4)\n\nFinally, we define the encoding system based on an IAF neuron model; we term this the IAF Encoder. We consider that this model has variable thresholds in its most general form, which may be useful if it is the result of a model reduction technique such as in [3], or in approaches where ∫_{t_i}^{t_{i+1}} f(τ)dτ can be calculated through other means, such as in [9]. A typical IAF Encoder is defined in the following way: given the thresholds {q_i} where q_i > 0 ∀i, the spikes {t_i} are such that\n\n∫_{t_i}^{t_{i+1}} f(τ)dτ = ±q_i   (5)\n\nThis signifies that the encoder outputs a spike at time t_{i+1} every time the integral ∫_{t_i}^{t} f(τ)dτ reaches the threshold q_i or −q_i. We assume that the decoder has knowledge of the value of the integral as well as the time at which the integral was reached. For a physical representation with neurons whose dynamics can faithfully be modeled using IAF neurons, we can imagine two neurons with the same input f; one neuron spikes when the positive threshold is reached while the other spikes when the negative threshold is reached. The decoder views the activity of both of these neurons and, with knowledge of the corresponding thresholds, decodes the signal accordingly. We can also take the approach of limiting ourselves to positive f(t). 
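The encoding rule in Equation 5 is easy to simulate numerically. Below is a minimal Python sketch; the paper provides no reference implementation, so the Euler integration, the step size, and all function and variable names here are our own illustrative choices.

```python
import numpy as np

# Minimal IAF encoder per Equation 5: integrate f until the running integral
# since the last spike reaches +q or -q, then emit a spike and reset.
# Euler step, step size and constant threshold q are illustrative assumptions.
def iaf_encode(f, t_end, q, dt=1e-4):
    spike_times, integrals = [], []
    acc, t = 0.0, 0.0
    while t < t_end:
        acc += f(t) * dt              # running integral since the last spike
        t += dt
        if abs(acc) >= q:             # threshold crossing: spike at time t
            spike_times.append(t)
            integrals.append(np.sign(acc) * q)
            acc = 0.0                 # reset for the next interspike interval
    return np.array(spike_times), np.array(integrals)

# Encode a slowly varying non-negative test input
f_in = lambda t: 0.05 * (1.0 + np.sin(0.2 * np.pi * t))
tk, qk = iaf_encode(f_in, t_end=100.0, q=0.01)
```

A decoder would consume the returned spike times and signed integrals; with the two thresholds ±q this mirrors the two-neuron arrangement described above.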
In order to remain general in the following treatment, we assume that we have knowledge of {∫_{t_i}^{t_{i+1}} f(τ)dτ}, as well as the corresponding spike times {t_i}.\n\n3 Theoretical Results\n\nThe following is a theorem introduced in [11], which was also applied to IAF Encoders in [10, 13, 14]. We will later use the operators and concepts introduced in this theorem.\n\nTheorem 1. Perfect Reconstruction: Given a sampling set {t_i}_{i∈Z} and the corresponding samples ∫_{t_i}^{t_{i+1}} f(τ)dτ, we can perfectly reconstruct f ∈ L^Ω_2 if sup_{i∈Z} (t_{i+1} − t_i) = δ for some δ < π/Ω. Moreover, f can be reconstructed iteratively in the following way, such that\n\n‖f − f^k‖_2 ≤ (Ωδ/π)^{k+1} ‖f‖_2   (6)\n\nand lim_{k→∞} f^k = f in L2:\n\nf^0 = Af   (7)\n\nf^1 = (I − A)f^0 + Af = (I − A)Af + Af   (8)\n\nf^k = (I − A)f^{k−1} + Af = Σ_{n=0}^{k} (I − A)^n Af   (9)\n\nwhere the operator A is defined as\n\nAf = Σ_{i=−∞}^{∞} (∫_{t_i}^{t_{i+1}} f(τ)dτ) sinc_Ω(t − s_i)   (10)\n\nand s_i = (t_i + t_{i+1})/2, the midpoint of each pair of spikes.\n\nProof. Provided in [11].\n\nThe above theorem requires an infinite number of spikes in order to start decoding. However, we would like a real-time decoder that outputs the 'best guess' at every time t in order for us to act on the estimate of the signal. In this paper, we introduce one such decoder; we first provide a high-level description of the real-time decoder, then a recursive algorithm to apply in the practical case, and finally we will provide error bounds for its performance.\n\nReal-Time Decoder\n\nAt every time t, the decoder outputs an estimate of the input signal f̃_t(t), where f̃_t(t) is an estimate of the signal calculated using all the spikes from time 0 to t. 
Since there is no new information between spikes, this is essentially the same as calculating an estimate after every spike t_i, f̃_{t_i}(t), and using this estimate till the next spike, i.e. for time t ∈ [t_i, t_{i+1}] (see Figure 2).\n\nFigure 2: A visualization of the decoding process. The original signal f(t) is shown in black and the spikes {t_i} are shown in blue. As each spike t_i arrives, a new estimate f̃_{t_i}(t) of the signal is formed (shown in green), which is modified after the next spike t_{i+1} by the innovation function g_{t_{i+1}}. The output of the decoder f̃_t(t) = Σ_{i∈Z} f̃_{t_i}(t) 1_{[t_i, t_{i+1})}(t) is shown in red.\n\nWe will show that we can calculate the estimate after every spike f̃_{t_{i+1}} as the sum of the previous estimate f̃_{t_i} and an innovation g_{t_{i+1}}. This procedure is captured in the algorithm given in Equations 11 and 12.\n\nRecursive Algorithm\n\nf̃^0_{t_{i+1}} = f̃^0_{t_i} + g^0_{t_{i+1}}   (11)\n\nf̃^k_{t_{i+1}} = f̃^k_{t_i} + g^k_{t_{i+1}}, where g^k_{t_{i+1}} = (g^{k−1}_{t_{i+1}} − A_{t_{i+1}} g^{k−1}_{t_{i+1}}) + g^0_{t_{i+1}}   (12)\n\nHere, f̃^0_{t_0} = 0, and g^0_{t_{i+1}}(t) = (∫_{t_i}^{t_{i+1}} f(τ)dτ) sinc_Ω(t − s_i). We denote f̃_{t_i}(t) = lim_{k→∞} f̃^k_{t_i}(t) and g_{t_{i+1}}(t) = lim_{k→∞} g^k_{t_{i+1}}(t). We define the operator A_T used in Equation 12 as\n\nA_T f = Σ_{i : |t_i| ≤ T} (∫_{t_i}^{t_{i+1}} f(τ)dτ) sinc_Ω(t − s_i)   (13)\n\nThe output of our causal real-time decoder can also be written as f̃_t(t) = Σ_{i∈Z} f̃_{t_i}(t) 1_{[t_i, t_{i+1})}(t). In the case of a decoder that uses a finite number of iterations K at every step, i.e. one that calculates f̃^K_{t_i} after every spike t_i, the decoded signal is f̃^K_t(t) = Σ_{i∈Z} f̃^K_{t_i}(t) 1_{[t_i, t_{i+1})}(t). The functions {f̃^k_{t_i}}_k are stored after every spike t_i, and thus do not need to be recomputed at the arrival of the next spike. Thus, when a new spike arrives at t_{i+1}, each f̃^k_{t_i} can be modified by adding the innovation function g^k_{t_{i+1}}.\n\nNext, we show an upper bound on the error incurred by the decoder.\n\nTheorem 2. Real-time reconstruction: Given a signal f ∈ L^Ω_{2,β}, passed through an IAF encoder with known thresholds, and given that the spikes satisfy a certain minimum density sup_{i∈Z} (t_{i+1} − t_i) = δ for some δ < π/Ω, we can construct a causal real-time decoder that reconstructs a function f̃_t(t) using the recursive algorithm in Equations 11 and 12, s.t.\n\n|f(t) − f̃_t(t)| ≤ c / (1 − Ωδ/π) ‖f‖_{2,β} (1 + t)^{−β}   (14)\n\nwhere c depends only on δ, Ω and β. Moreover, if we use a finite number of iterations K at every step, we obtain the following error:\n\n|f(t) − f̃^K_t(t)| ≤ c (1 − (Ωδ/π)^{K+1}) / (1 − Ωδ/π) ‖f‖_{2,β} (1 + t)^{−β} + (Ωδ/π)^{K+1} (1 + Ωδ/π) / (1 − Ωδ/π) ‖f‖_2   (15)\n\nProof. Provided in the Appendix.\n\nTheorem 2 is the main result of this paper. It shows that the upper bound of the real-time reconstruction error using the decoding algorithm in Equations 11 and 12 decreases polynomially as a function of time. This implies that the approximation f̃_t(t) becomes more and more accurate with the passage of time, and moreover, we can calculate the exact amount of time we would need to record to have a given level of accuracy. 
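The reconstruction behind Equations 11 and 13 can be sketched numerically. The Python fragment below is our own illustration, not the authors' code: it evaluates the truncated iteration of Equation 9 on a fixed time grid from the spikes seen so far, recomputing from scratch rather than storing innovations, and it normalizes the low-pass kernel as sin(Ωx)/(πx), a convention under which the iteration contracts (kernel normalizations vary across the cited sampling literature). The grid, trapezoidal integration and all names are illustrative assumptions.

```python
import numpy as np

def trapezoid(y, x):
    # trapezoidal rule, written out to avoid version-specific numpy helpers
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def real_time_decode(spikes, integrals, t_grid, Omega, K):
    mids = 0.5 * (spikes[:-1] + spikes[1:])          # midpoints s_i

    def A(vals):                                     # operator A_T from interval integrals
        out = np.zeros_like(t_grid)
        for s, v in zip(mids, vals):
            out += v * (Omega / np.pi) * np.sinc(Omega * (t_grid - s) / np.pi)
        return out

    def interval_integrals(g):                       # integral of g over each [t_i, t_{i+1}]
        return np.array([trapezoid(g[(t_grid >= a) & (t_grid <= b)],
                                   t_grid[(t_grid >= a) & (t_grid <= b)])
                         for a, b in zip(spikes[:-1], spikes[1:])])

    est = A(np.asarray(integrals))                   # f^0 = A_T f
    for _ in range(K):                               # f^k = f^{k-1} + A_T (f - f^{k-1})
        est = est + A(np.asarray(integrals) - interval_integrals(est))
    return est

# Toy usage: spikes every 0.5 s encoding f(t) = sin(w t), with w = Omega = 0.3*pi
w = 0.3 * np.pi
t_grid = np.linspace(0.0, 20.0, 4001)
spikes = np.arange(0.0, 20.0 + 0.5, 0.5)
ints = (np.cos(w * spikes[:-1]) - np.cos(w * spikes[1:])) / w   # exact integrals
est = real_time_decode(spikes, ints, t_grid, Omega=w, K=5)
```

In a streaming setting one would, as the text describes, store the per-iteration estimates and add only the innovations when a new spike arrives; recomputing the truncated iteration from all spikes seen so far is equivalent but less efficient.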
Given a maximum allowed error ε, these bounds can provide a combination (t, K) that will ensure |f(t) − f̃^K_t(t)| ≤ ε if f ∈ L^Ω_{2,β}, and if the density constraint is met.\n\nWe can further show that the L2 norm of the reconstruction remains bounded with a bounded input (BIBO stability), by bounding the L2 norm of the error between the original signal and the reconstruction.\n\nCorollary 1. Bounded L2 norm: The causal decoder provided in Theorem 2, with the same assumptions and in the case of K → ∞, constructs a signal f̃_t(t) s.t. the L2 norm of the error ‖f − f̃_t‖_2 = √(∫_0^∞ |f(t) − f̃_t(t)|² dt) is bounded: ‖f − f̃_t‖_2 ≤ (c/√(2β−1)) / (1 − Ωδ/π) ‖f‖_{2,β}, where c is the same constant as in Theorem 2.\n\nProof.\n\n√(∫_0^∞ |f(t) − f̃_t(t)|² dt) ≤ √(∫_0^∞ (c / (1 − Ωδ/π))² ‖f‖²_{2,β} (1 + t)^{−2β} dt) = (c/√(2β−1)) / (1 − Ωδ/π) ‖f‖_{2,β}   (16)\n\nHere, the first inequality is due to Theorem 2, and all the constants are as defined in the same.\n\nRemark 1: This result also implies that we have a decay in the root-mean-square (RMS) error, i.e. √((1/T) ∫_0^T |f(t) − f̃_t(t)|² dt) → 0 as T → ∞. For the case of a finite number of iterations K < ∞, the RMS error converges to the non-zero constant (Ωδ/π)^{K+1} (1 + Ωδ/π) / (1 − Ωδ/π) ‖f‖_2.\n\nRemark 2: The methods used in Corollary 1 also provide a bound on the error in the weighted L2 norm, i.e. ‖f − f̃_t‖_{2,γ} ≤ (c/√(2(β−γ)−1)) / (1 − Ωδ/π) ‖f‖_{2,β} for γ < β − 1/2, which may be a more intuitive form to use for a subsequent stability analysis.\n\n4 Numerical Simulations\n\nWe simulated signals f(t) of the following form, for t ∈ [0, 100], using a stepsize of 10^−2:\n\nf(t) = Σ_{k=1}^{50} w_k sinc_Ω(t − d_k) / Σ_{k=1}^{50} w_k   (17)\n\nHere, the w_k's and d_k's were picked uniformly at random from the intervals [0, 1] and [0, 100] respectively. Note that f ∈ L^Ω_{2,β}. 
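Test signals of the form in Equation 17 can be generated as follows. This Python sketch uses our own names and a fixed seed; the paper's own simulations used MATLAB R2014a.

```python
import numpy as np

# Random bandlimited test signal of Equation 17: a normalized sum of 50
# shifted sincs, with weights w_k ~ U[0, 1] and centers d_k ~ U[0, 100].
def make_test_signal(Omega, t_end=100.0, dt=1e-2, n_terms=50, seed=0):
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, t_end, dt)
    wts = rng.uniform(0.0, 1.0, n_terms)
    d = rng.uniform(0.0, t_end, n_terms)
    # sinc_Omega(x) = sin(Omega x)/(Omega x); np.sinc(y) = sin(pi y)/(pi y)
    f = np.sum(wts[:, None] * np.sinc(Omega * (t[None, :] - d[:, None]) / np.pi),
               axis=0)
    return t, f / np.sum(wts)

t_sig, f_sig = make_test_signal(Omega=0.3 * np.pi)
```

Feeding such a signal through an IAF encoder with constant thresholds q_i = q approximates the setup of Figure 3.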
All simulations were performed using MATLAB R2014a. For each simulation experiment, at every time t we decoded using only the spikes before time t.\n\nWe first provide example reconstructions using the Real-Time Decoder for four signals in Figure 3, using constant thresholds, i.e. q_i = q ∀i. We compare our results to those obtained using a Linear Firing Rate (FR) Decoder, i.e. we let the reconstructed signal be a linear function of the number of spikes in the past Δ seconds, Δ being the window size. We can see that there is a delay in the reconstruction with this decoding approach. Moreover, the reconstruction is not as accurate as that using the Real-Time Decoder.\n\nFigure 3: (a,c,e,g) Four example reconstructions using the Real-Time Decoder, with the original signal f(t) in black solid and the reconstructed signal f̃_t(t) in red dashed lines, for Ω = 0.2π, 0.3π, 0.4π and 0.5π respectively. Here, [β, K] = [2, 500], and q_i = 0.01 ∀i. (b,d,f,h) The same signals were decoded using a Linear Firing Rate (FR) Decoder. A window size of Δ = 3 s was used. (Each panel plots Amplitude against Time (s).)\n\nFigure 4: Average error for 20 different signals while varying different parameters. (a) Ω is varied; [β, δ, K] = [2, π/(2Ω), 500]. (b) δ is varied; [Ω, β, K] = [0.3π, 2, 500]. (c) β is varied; [Ω, δ, K] = [0.3π, 1/0.3, 500]. (d) K is varied; [Ω, δ, β] = [0.3π, 5/3, 2]. (Each panel plots the normalized error ‖f − f̃_t‖_2 / ‖f‖_{2,β} against the varied parameter.)\n\nNext, we show the decay of the real-time error by averaging out the error for 20 different input signals, while varying certain parameters, namely Ω, δ, β and K (Figure 4). The thresholds q_i were chosen to be constant a priori, but were reduced to satisfy the density constraint wherever necessary. According to Equation 14 (including the effect of the constant c), the error should decrease as Ω is decreased. We see this effect in the simulation study in Figure 4a. 
For these simulations, we chose δ such that Ωδ/π < 1; thus δ was decreasing as Ω increased; however, the effect of the increasing Ω dominated in this case.\n\nIn Figure 4b we see that increasing δ while keeping the bandwidth constant does indeed increase the error; thus the algorithm is sensitive to the density of the spikes. In this figure, all the values of δ satisfy the density constraint, i.e. Ωδ/π < 1.\n\nIncreasing β is seen to have a large effect, as seen in Figure 4c: the error decreases polynomially in β (note the log scale on the y-axis). Although increasing β in our simulations also increased the bandwidth of the signal, the faster decay had a larger effect on the error than the change in bandwidth.\n\nIn Figure 4d, the effect of increasing K is apparent; however, this error flattens out for large values of K, showing convergence of the algorithm.\n\n5 Conclusions\n\nWe provide a real-time decoder to reconstruct a signal f ∈ L^Ω_{2,β} encoded by an IAF encoder. Under Nyquist-type spike density conditions, we show that the reconstructed signal f̃_t(t) converges to f(t) polynomially in time, or with a fixed error that depends on the computation power used to reconstruct the function. Moreover, we get a lower error as the spike density increases, i.e. we get better results if we have more spikes. Decreasing the bandwidth or increasing the decay of the signal both lead to a decrease in the error, as corroborated by the numerical simulations. This decoder also outperforms the linear decoder that acts on the firing rate of the neuron. However, the main utility of this decoder is that it comes with verifiable bounds on the decoding error as we record more spikes.\n\nThere is a severe need in the BMI community for considering error bounds while decoding signals from the brain. 
For example, in the case where the reconstructed signal is driving a prosthetic, we are usually placing the decoder and machine in an inherent feedback loop (where the feedback is visual in this case). A stability analysis of this feedback loop includes calculating a bound on the error incurred by the decoding process, which is the first step for the construction of a device that robustly tracks agile maneuvers. In this paper, we provide an upper bound on the error incurred by the real-time decoding process, which can be used along with concepts in robust control theory to provide sufficient conditions on the prosthetic and feedback system in order to ensure stability [15–17].\n\nAcknowledgments\n\nResearch supported by the National Science Foundation’s Emerging Frontiers in Research and Innovation Grant (1137237).\n\nReferences\n\n[1] A. L. Hodgkin and A. F. Huxley, “A quantitative description of membrane current and its application to conduction and excitation in nerve,” The Journal of Physiology, vol. 117, no. 4, p. 500, 1952.\n\n[2] W. Gerstner and W. M. Kistler, Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, 2002.\n\n[3] A. A. Lazar, “Population encoding with Hodgkin–Huxley neurons,” IEEE Transactions on Information Theory, vol. 56, no. 2, pp. 821–837, 2010.\n\n[4] J. M. Carmena, M. A. Lebedev, R. E. Crist, J. E. O’Doherty, D. M. Santucci, D. F. Dimitrov, P. G. Patil, C. S. Henriquez, and M. A. Nicolelis, “Learning to control a brain–machine interface for reaching and grasping by primates,” PLoS Biology, vol. 1, no. 2, p. e42, 2003.\n\n[5] M. D. Serruya, N. G. Hatsopoulos, L. Paninski, M. R. Fellows, and J. P. Donoghue, “Brain-machine interface: Instant neural control of a movement signal,” Nature, vol. 416, no. 6877, pp. 141–142, 2002.\n\n[6] W. Wu, J. E. Kulkarni, N. G. Hatsopoulos, and L. 
Paninski, “Neural decoding of hand motion using a linear state-space model with hidden states,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 17, no. 4, pp. 370–378, 2009.\n\n[7] E. N. Brown, L. M. Frank, D. Tang, M. C. Quirk, and M. A. Wilson, “A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells,” The Journal of Neuroscience, vol. 18, no. 18, pp. 7411–7425, 1998.\n\n[8] U. T. Eden, L. M. Frank, R. Barbieri, V. Solo, and E. N. Brown, “Dynamic analysis of neural encoding by point process adaptive filtering,” Neural Computation, vol. 16, no. 5, pp. 971–998, 2004.\n\n[9] A. A. Lazar, “Time encoding with an integrate-and-fire neuron with a refractory period,” Neurocomputing, vol. 58, pp. 53–58, 2004.\n\n[10] A. A. Lazar and L. T. Tóth, “Time encoding and perfect recovery of bandlimited signals,” Proceedings of the ICASSP, vol. 3, pp. 709–712, 2003.\n\n[11] H. G. Feichtinger and K. Gröchenig, “Theory and practice of irregular sampling,” Wavelets: Mathematics and Applications, vol. 1994, pp. 305–363, 1994.\n\n[12] H. G. Feichtinger, J. C. Príncipe, J. L. Romero, A. S. Alvarado, and G. A. Velasco, “Approximate reconstruction of bandlimited functions for the integrate and fire sampler,” Advances in Computational Mathematics, vol. 36, no. 1, pp. 67–78, 2012.\n\n[13] A. A. Lazar and L. T. Tóth, “Perfect recovery and sensitivity analysis of time encoded bandlimited signals,” IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 51, no. 10, pp. 2060–2073, 2004.\n\n[14] D. Gontier and M. Vetterli, “Sampling based on timing: Time encoding machines on shift-invariant subspaces,” Applied and Computational Harmonic Analysis, vol. 36, no. 
1, pp. 63–78, 2014.\n\n[15] S. V. Sarma and M. A. Dahleh, “Remote control over noisy communication channels: A first-order example,” IEEE Transactions on Automatic Control, vol. 52, no. 2, pp. 284–289, 2007.\n\n[16] S. V. Sarma and M. A. Dahleh, “Signal reconstruction in the presence of finite-rate measurements: finite-horizon control applications,” International Journal of Robust and Nonlinear Control, vol. 20, no. 1, pp. 41–58, 2010.\n\n[17] S. Saxena and M. A. Dahleh, “Analyzing the effect of an integrate and fire encoder and decoder in feedback,” Proceedings of the 53rd IEEE Conference on Decision and Control (CDC), 2014.\n", "award": [], "sourceid": 1510, "authors": [{"given_name": "Shreya", "family_name": "Saxena", "institution": "Massachusetts Institute of Technology"}, {"given_name": "Munther", "family_name": "Dahleh", "institution": "Massachusetts Institute of Technology"}]}