{"title": "Neuronal Spike Generation Mechanism as an Oversampling, Noise-shaping A-to-D converter", "book": "Advances in Neural Information Processing Systems", "page_first": 503, "page_last": 511, "abstract": "We explore the hypothesis that the neuronal spike generation mechanism is an analog-to-digital converter, which rectifies low-pass filtered summed synaptic currents and encodes them into spike trains linearly decodable in post-synaptic neurons. To digitally encode an analog current waveform, the sampling rate of the spike generation mechanism must exceed its Nyquist rate. Such oversampling is consistent with the experimental observation that the precision of the spike-generation mechanism is an order of magnitude greater than the cut-off frequency of dendritic low-pass filtering. To achieve additional reduction in the error of analog-to-digital conversion, electrical engineers rely on noise-shaping. If noise-shaping were used in neurons, it would introduce correlations in spike timing to reduce low-frequency (up to Nyquist) transmission error at the cost of high-frequency one (from Nyquist to sampling rate). Using experimental data from three different classes of neurons, we demonstrate that biological neurons utilize noise-shaping. We also argue that rectification by the spike-generation mechanism may improve energy efficiency and carry out de-noising. Finally, the zoo of ion channels in neurons may be viewed as a set of predictors, various subsets of which are activated depending on the statistics of the input current.", "full_text": "Neuronal Spike Generation Mechanism as an Oversampling, Noise-shaping A-to-D Converter

Dmitri B. 
Chklovskii
Janelia Farm Research Campus
Howard Hughes Medical Institute
mitya@janelia.hhmi.org

Daniel Soudry
Department of Electrical Engineering
Technion
daniel.soudry@gmail.com

Abstract

We test the hypothesis that the neuronal spike generation mechanism is an analog-to-digital (AD) converter encoding rectified low-pass filtered summed synaptic currents into a spike train linearly decodable in post-synaptic neurons. Faithful encoding of an analog waveform by a binary signal requires that the spike generation mechanism has a sampling rate exceeding the Nyquist rate of the analog signal. Such oversampling is consistent with the experimental observation that the precision of the spike-generation mechanism is an order of magnitude greater than the cut-off frequency of low-pass filtering in dendrites. Additional improvement in the coding accuracy may be achieved by noise-shaping, a technique used in signal processing. If noise-shaping were used in neurons, it would reduce coding error relative to a Poisson spike generator for frequencies below Nyquist by introducing correlations into spike times. By using experimental data from three different classes of neurons, we demonstrate that biological neurons utilize noise-shaping. Therefore, the spike-generation mechanism can be viewed as an oversampling and noise-shaping AD converter.

The nature of the neural spike code remains a central problem in neuroscience [1-3]. In particular, no consensus exists on whether information is encoded in firing rates [4, 5] or individual spike timing [6, 7]. On the single-neuron level, evidence exists to support both points of view. On the one hand, post-synaptic currents are low-pass-filtered by dendrites with the cut-off frequency of approximately 30Hz [8], Figure 1B, providing ammunition for the firing rate camp: if the signal reaching the soma is slowly varying, why would precise spike timing be necessary? 
On the other hand, the ability of the spike-generation mechanism to encode harmonics of the injected current up to about 300Hz [9, 10], Figure 1B, points at its exquisite temporal precision [11]. Yet, in view of the slow variation of the somatic current, such precision may seem gratuitous and puzzling.

The timescale mismatch between gradual variation of the somatic current and high precision of spike generation has been addressed previously. Existing explanations often rely on the population nature of the neural code [10, 12]. Although this is a distinct possibility, the question remains whether invoking population coding is necessary. Other possible explanations for the timescale mismatch include the possibility that some synaptic currents (for example, GABAergic) may be generated by synapses proximal to the soma and therefore not subject to low-pass filtering, or that the high frequency harmonics are so strong in the pre-synaptic spike that, despite attenuation, their trace is still present. Although in some cases these explanations could apply, for the majority of synaptic inputs to typical neurons there is a glaring mismatch.

The perceived mismatch between the time scales of somatic currents and the spike-generation mechanism can be resolved naturally if one views spike trains as digitally encoding analog somatic currents [13-15], Figure 1A. Although somatic currents vary slowly, information that could be communicated by their analog amplitude far exceeds that of binary signals, such as all-or-none spikes, of the same sampling rate. Therefore, faithful digital encoding requires the sampling rate of the digital signal to be much higher than the cut-off frequency of the analog signal, so-called over-sampling. 
Although the spike generation mechanism operates in continuous time, the high temporal precision of the spike-generation mechanism may be viewed as a manifestation of oversampling, which is needed for the digital encoding of the analog signal. Therefore, the extra order of magnitude in temporal precision available to the spike-generation mechanism relative to the somatic current, Figure 1B, is necessary to faithfully encode the amplitude of the analog signal, thus potentially reconciling the firing rate and the spike timing points of view [13-15].

Figure 1. Hybrid digital-analog operation of neuronal circuits. A. Post-synaptic currents are low-pass filtered and summed in dendrites (black) to produce a somatic current (blue). This analog signal is converted by the spike generation mechanism into a sequence of all-or-none spikes (green), a digital signal. Spikes propagate along an axon and are chemically transduced across synapses (gray) into post-synaptic currents (black), whose amplitude reflects synaptic weights, thus converting the digital signal back to analog. B. Frequency response function for dendrites (blue, adapted from [8]) and for the spike generation mechanism (green, adapted from [9]). Note the one order of magnitude gap between the cut-off frequencies. C. Amplitude of the summed post-synaptic currents depends strongly on spike timing. If the blue spike arrives just 5ms later, as shown in red, the EPSCs sum to a value already 20% less. Therefore, the extra precision of the digital signal may be used to communicate the amplitude of the analog signal.

In signal processing, efficient AD conversion combines the principle of oversampling with that of noise-shaping, which utilizes correlations in the digital signal to allow more accurate encoding of the analog amplitude. 
This is exemplified by a family of AD converters called ΔΣ modulators [16], of which the basic one is analogous to an integrate-and-fire (IF) neuron [13-15]. The analogy between the basic ΔΣ modulator and the IF neuron led to the suggestion that neurons also use noise-shaping to encode the incoming analog current waveform in the digital spike train [13]. However, the hypothesis of noise-shaping AD conversion has never been tested experimentally in biological neurons.

In this paper, by analyzing existing experimental datasets, we demonstrate that noise-shaping is present in three different classes of neurons from vertebrates and invertebrates. This lends support to the view that neurons act as oversampling and noise-shaping AD converters and accounts for the mismatch between the slowly varying somatic currents and precise spike timing. Moreover, we show that the degree of noise-shaping in biological neurons exceeds that used by basic ΔΣ modulators or IF neurons and propose viewing more complicated models in the noise-shaping framework. This paper is organized as follows: We review the principles of oversampling and noise-shaping in Section 2. In Section 3, we present experimental evidence for noise-shaping AD conversion in neurons. In Section 4, we argue that rectification of somatic currents may improve energy efficiency and/or implement de-noising.

2. Oversampling and noise-shaping in AD converters

To understand how oversampling can lead to more accurate encoding of the analog signal amplitude in a digital form, we first consider a Poisson spike encoder, whose rate of spiking is modulated by the signal amplitude, Figure 2A. Such an AD converter samples an analog signal at discrete time points and generates a spike with a probability given by the (normalized) signal amplitude. 
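A minimal simulation of such a Poisson encoder illustrates how averaging over an oversampled window reduces the relative encoding error. This is an illustrative sketch, not part of the paper's analysis: the amplitude x = 0.2 and the sample counts are arbitrary choices, and spikes are drawn as Bernoulli trials, a common stand-in for the Poisson description.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_encoder_error(x, R, trials=20000):
    # Encode a constant normalized amplitude x with R Bernoulli(x) samples
    # per Nyquist half-period; decode by averaging the samples.
    spikes = rng.random((trials, R)) < x
    decoded = spikes.mean(axis=1)
    return np.sqrt(np.mean((decoded - x) ** 2)) / x  # mean relative error

x = 0.2                      # arbitrary normalized amplitude
err_16 = poisson_encoder_error(x, 16)
err_64 = poisson_encoder_error(x, 64)
# Quadrupling the number of samples should roughly halve the error
print(err_16, err_64)
```

Quadrupling the samples per half-period roughly halves the error, the inverse-square-root scaling and diminishing returns discussed in Section 2.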
Because of the binary nature of spike trains, the resulting spike train encodes the signal with a large error even when the sampling is done at the Nyquist rate, i.e. the lowest rate for alias-free sampling.

To reduce the encoding error, a Poisson encoder can sample at frequencies, fs, higher than Nyquist, fN - hence the term oversampling, Figure 2B. When combined with decoding by low-pass filtering (down to Nyquist) on the receiving end, this leads to a reduction of the error, which can be estimated as follows. The number of samples over a Nyquist half-period (1/2fN) is given by the oversampling ratio: R = fs/(2fN). As the normalized signal amplitude, x, stays roughly constant over the Nyquist half-period, it can be encoded by spikes generated with a fixed probability, x. For a Poisson process the variance in the number of spikes is equal to the mean, Rx. Therefore, the mean relative error of the signal decoded by averaging over the Nyquist half-period is:

ε ≈ sqrt(Rx)/(Rx) = (Rx)^(-1/2), (1)

indicating that oversampling reduces transmission error. However, the weak dependence of the error on the oversampling frequency indicates diminishing returns on the investment in oversampling and motivates one to search for other ways to lower the error.

Figure 2. Oversampling and noise-shaping in AD conversion. A. Analog somatic current (blue) and its digital code (green). The difference between the green and the blue curves is the encoding error. B. Digital output of an oversampling Poisson encoder over one Nyquist half-period. C. Error power spectrum of a Nyquist (dark green) and oversampled (light green) Poisson encoder. Although the total error power is the same, the fraction surviving low-pass filtering during decoding (solid green) is smaller in the oversampled case. D. Basic ΔΣ modulator. E. Signal at the output of the integrator. F. 
Digital output of the ΔΣ modulator over one Nyquist period. G. Error power spectrum of the ΔΣ modulator (brown) is shifted to higher frequencies and low-pass filtered during decoding. The remaining error power (solid brown) is smaller than for the Poisson encoder.

To reduce encoding error beyond the 1/2 power of the oversampling ratio, the principle of noise-shaping was put forward [17]. To illustrate noise-shaping, consider a basic AD converter called ΔΣ [18], Figure 2D. In the basic ΔΣ modulator, the previous quantized signal is fed back and subtracted from the incoming signal and then the difference is integrated in time. Rather than quantizing the input signal, as would be done in the Poisson encoder, the ΔΣ modulator quantizes the integral of the difference between the incoming analog signal and the previous quantized signal, Figure 2F. One can see that, in the oversampling regime, the quantization error of the basic ΔΣ modulator is significantly less than that of the Poisson encoder. As the variance in the number of spikes over the Nyquist period is less than one, the mean relative error of the signal is at most 1/(Rx), which is better than the Poisson encoder.

To gain additional insight and understand the origin of the term noise-shaping, we repeat the above analysis in the Fourier domain. First, the Poisson encoder has a flat power spectrum up to the sampling frequency, Figure 2C. Oversampling preserves the total error power but extends the frequency range, resulting in lower error power below Nyquist. Second, a more detailed analysis of the basic ΔΣ modulator, where the dynamics is linearized by replacing the quantization device with a random noise injection [19], shows that the quantization noise is effectively differentiated. 
Taking the derivative in time is equivalent to multiplying the power spectrum of the quantization noise by frequency squared. Such reduction of noise power at low frequencies is an example of noise shaping, Figure 2G. Under the additional assumption of white quantization noise, such analysis yields:

ε ∝ R^(-3/2), (2)

which for R >> 1 is significantly better performance than for the Poisson encoder, Eq. (1).

As mentioned previously, the basic ΔΣ modulator, Figure 2D, in the continuous-time regime is nothing other than an IF neuron [13, 20, 21]. In the IF neuron, quantization is implemented by the spike generation mechanism and the negative feedback corresponds to the after-spike reset. Note that resetting the integrator to zero is strictly equivalent to subtraction only for continuous-time operation. In discrete-time computer simulations, the integrator value may exceed the threshold, and, therefore, subtraction of the threshold value rather than a reset must be used. Next, motivated by the ΔΣ-IF analogy, we look for signs of noise-shaping AD conversion in real neurons.

3. Experimental evidence of noise-shaping AD conversion in real neurons

In order to determine whether noise-shaping AD conversion takes place in biological neurons, we analyzed three experimental datasets, where spike trains were generated by time-varying somatic currents: 1) rat somatosensory cortex L5 pyramidal neurons [9], 2) mouse olfactory mitral cells [22, 23], and 3) fruit fly olfactory receptor neurons [24]. In the first two datasets, the current was injected through an electrode in whole-cell patch clamp mode, while in the third, the recording was extracellular and the intrinsic somatic current could be measured because the glial compartment included only one active neuron. 
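The ΔΣ-IF correspondence described in Section 2 can be illustrated with a short simulation. This is a toy sketch under assumed parameters (1 kHz sampling, a 2 Hz test signal, unit threshold): a discrete-time first-order ΔΣ loop implemented with threshold subtraction, as noted above, compared against a Poisson encoder on in-band error power.

```python
import numpy as np

rng = np.random.default_rng(0)

fs, f_band = 1000, 10               # sampling rate and signal band (Hz), assumed
t = np.arange(0, 10.0, 1 / fs)
x = 0.3 + 0.2 * np.sin(2 * np.pi * 2 * t)   # slowly varying normalized input

# Poisson encoder: independent spike with probability x at each sample
poisson = (rng.random(t.size) < x).astype(float)

# Basic discrete-time delta-sigma modulator / IF neuron:
# integrate the input; on a spike, subtract the threshold (not reset to zero)
v, ds = 0.0, np.zeros(t.size)
for i in range(t.size):
    v += x[i]
    if v >= 1.0:
        v -= 1.0
        ds[i] = 1.0

def inband_error_power(s):
    # encoding-error power at frequencies within the signal band
    e = np.fft.rfft(s - x)
    f = np.fft.rfftfreq(t.size, 1 / fs)
    return np.mean(np.abs(e[f <= f_band]) ** 2)

print(inband_error_power(ds) < inband_error_power(poisson))
```

Both encoders emit the same mean spike rate, but the ΔΣ/IF spike train concentrates its quantization error above the signal band, so its in-band error is far smaller.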
Testing the noise-shaping AD conversion hypothesis is complicated by the fact that encoded and decoded signals are hard to measure accurately. First, as somatic current is rectified by the spike-generation mechanism, only its super-threshold component can be encoded faithfully, making it hard to know exactly what is being encoded. Second, decoding in the dendrites is not accessible in these single-neuron recordings.

In view of these difficulties, we start by simply computing the power spectrum of the reconstruction error obtained by subtracting a scaled and shifted, but otherwise unaltered, spike train from the somatic current. The scaling factor was determined by the total weight of the decoding linear filter and the shift was optimized to maximize information capacity, see below. At frequencies below 20Hz the error contains significantly lower power than the input signal, Figure 3, indicating that the spike generation mechanism may be viewed as an AD converter. Furthermore, the error power spectrum of the biological neuron is below that of the Poisson encoder, thus indicating the presence of noise-shaping. For dataset 3 we also plot the error power spectrum of the IF neuron, the threshold of which is chosen to generate the same number of spikes as the biological neuron.

Figure 3. Evidence of noise-shaping. Power spectra of the somatic current (blue), difference between the somatic current and the digital spike train of the biological neuron (black), of the Poisson encoder (green) and of the IF neuron (red). Left: dataset 1, right: dataset 3.
Although the simple analysis presented above indicates noise-shaping, subtracting the spike train from the input signal, Figure 3, does not accurately quantify the error when decoding involves additional filtering. An example of such additional encoding/decoding is predictive coding, which will be discussed below [25]. To take such a decoding filter into account, we computed a decoded waveform by convolving the spike train with the optimal linear filter, which predicts the somatic current from the spike train with the least mean squared error.

Our linear decoding analysis lends additional support to the noise-shaping AD conversion hypothesis [13-15]. First, the optimal linear filter shape is similar to unitary post-synaptic currents, Figure 4B, thus supporting the view that dendrites reconstruct the somatic current of the pre-synaptic neuron by low-pass filtering the spike train in accordance with the noise-shaping principle [13]. Second, we found that linear decoding using an optimal filter accounts for 60-80% of the somatic current variance. Naturally, such prediction works better for neurons in the supra-threshold regime, i.e. with high firing rates, an issue to which we return in Section 4. To avoid complications associated with rectification, for now we focused on neurons which were in the supra-threshold regime by monitoring that the relationship between predicted and actual current is close to linear.

Figure 4. Linear decoding of experimentally recorded spike trains. A. Waveform of somatic current (blue), resulting spike train (black), and the linearly decoded waveform (red) from dataset 1. B. Top: Optimal linear filter for the trace in A, representative of other datasets as well. Bottom: Typical EPSPs have a shape similar to the decoding filter (adapted from [26]). C-D. 
Power spectra of the somatic current (blue), the decoding error of the biological neuron (black), the Poisson encoder (green), and the IF neuron (red) for dataset 1 (C) and dataset 3 (D).

Next, we analyzed the spectral distribution of the reconstruction error calculated by subtracting the decoded spike train, i.e. convolved with the computed optimal linear filter, from the somatic current. We found that at low frequencies the error power is significantly lower than in the input signal, Figure 4C,D. This observation confirms that signals below the dendritic cut-off frequency of 20-30Hz can be efficiently communicated using spike trains.

To quantify the effect of noise-shaping, we computed the information capacity of different encoders:

C = sum_{f: S(f) > N(f)} log2[ S(f)/N(f) ],

where S(f) and N(f) are the power spectra of the somatic current and the encoding error correspondingly, and the sum is computed only over the frequencies for which S(f) > N(f). Because the plots in Figure 4C,D use a semi-logarithmic scale, the information capacity can be estimated from the area between a somatic current (blue) power spectrum and an error power spectrum. We find that the biological spike generation mechanism has higher information capacity than the Poisson encoder and IF neurons. Therefore, neurons act as AD converters with stronger noise-shaping than IF neurons.

We now return to the predictive nature of the spike generation mechanism. Given the causal nature of the spike generation mechanism, it is surprising that the optimal filters for all three datasets carry most of their weight following a spike, Figure 4B. 
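The decoding and capacity measures used above can be sketched on synthetic data. Everything here is a toy stand-in, not the paper's data or pipeline: the "somatic current" is smoothed noise, the "spike train" is a crude threshold proxy, and the lag window is arbitrary; the point is only the mechanics of least-squares filter estimation and the capacity sum over bins where the signal spectrum exceeds the error spectrum.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: a temporally correlated "somatic current" and a crude
# binary "spike train" derived from it by thresholding
n = 4096
x = np.convolve(rng.standard_normal(n), np.ones(50) / 50, mode="same")
s = (x > np.quantile(x, 0.7)).astype(float)

# Least-squares optimal linear decoding filter over a window of lags
lags = np.arange(-20, 60)
S = np.stack([np.roll(s, -k) for k in lags], axis=1)
h, *_ = np.linalg.lstsq(S, x, rcond=None)
decoded = S @ h

# Capacity-style sum of log2(S(f)/N(f)) over bins where signal beats error
Pxx = np.abs(np.fft.rfft(x)) ** 2
Pee = np.abs(np.fft.rfft(x - decoded)) ** 2
mask = Pxx > Pee
capacity = np.sum(np.log2(Pxx[mask] / Pee[mask]))
print(capacity > 0)
```

Because the least-squares filter is free to place weight at lags after a spike, the same machinery exposes the predictive filter shape when the input is temporally correlated.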
This indicates that the spike generation mechanism is capable of making predictions, which are possible in these experiments because somatic currents are temporally correlated. We note that these observations make delay-free reconstruction of the signal possible, thus allowing fast operation of neural circuits [27].

The predictive nature of the encoder can be captured by a ΔΣ modulator embedded in a predictive coding feedback loop [28], Figure 5A. We verified by simulation that such a nested architecture generates a similar optimal linear filter with most of its weight in the time following a spike, Figure 5A right. Of course, such prediction is only possible for correlated inputs, implying that the shape of the optimal linear filter depends on the statistics of the inputs. The role of predictive coding is to reduce the dynamic range of the signal that enters the ΔΣ modulator, thus avoiding overloading. A possible biological implementation of such integrating feedback could be Ca2+ concentration and Ca2+-dependent potassium channels [25, 29].

Figure 5. Enhanced ΔΣ modulators. A. ΔΣ modulator combined with a predictive coder. In such a device, the optimal decoding filter computed for correlated inputs has most of its weight following a spike, similar to experimental measurements, Figure 4B. B. Second-order ΔΣ modulator possesses stronger noise-shaping properties. Because such a circuit contains an internal state variable, it generates a non-periodic spike train in response to a constant input. Bottom trace shows a typical result of a simulation. Black - spikes, blue - input current.

4. Possible reasons for current rectification: energy efficiency and de-noising

We have shown that at high firing rates biological neurons encode somatic current into a linearly decodable spike train. 
However, at low firing rates linear decoding cannot faithfully reproduce the somatic current because of rectification in the spike generation mechanism. If the objective of spike generation is faithful AD conversion, why would such rectification exist? We see two potential reasons: energy efficiency and de-noising.

It is widely believed that minimizing metabolic costs is an important consideration in brain design and operation [30, 31]. Moreover, spikes are known to consume a significant fraction of the metabolic budget [30, 32], placing a premium on their total number. Thus, we can postulate that neuronal spike trains find a trade-off between the mean squared error in the decoded spike train relative to the input signal and the total number of spikes, as expressed by the following cost function over a time interval T:

E = sum_{t=1}^{T} ( x_t - sum_{t'} w_{t-t'} s_{t'} )^2 + λ sum_{t=1}^{T} s_t, (3)

where x is the analog input signal, s is the binary spike sequence composed of zeros and ones, and w is the linear filter.

To demonstrate how solving Eq. (3) would lead to thresholding, let us consider a simplified version taken over a Nyquist period, during which the input signal stays constant:

E = (x - N)^2 + λN, (4)

where N is the number of spikes, and x and λ are normalized by w. Minimizing such a cost function reduces to choosing the lowest lying parabola for a given x, Figure 6A. Therefore, thresholding is a natural outcome of minimizing a cost function combining the decoding error and the energy cost, Eq. (3).

In addition to energy efficiency, there may be a computational reason for thresholding somatic current in neurons. To illustrate this point, we note that the cost function in Eq. (3) for continuous variables, st, may be viewed as a non-negative version of the L1-norm regularized linear regression called LASSO [33], which is commonly used for de-noising of sparse and Laplacian signals [34]. 
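The thresholding behavior implied by the simplified cost, assuming it takes the form E = (x - N)^2 + λN over a Nyquist period, can be checked by brute force. The value λ = 0.5 is a hypothetical energy cost per spike chosen for illustration.

```python
import numpy as np

lam = 0.5  # hypothetical energy cost per spike

def best_spike_count(x, lam, n_max=5):
    # minimize (x - N)^2 + lam * N over non-negative integers N
    costs = [(x - n) ** 2 + lam * n for n in range(n_max + 1)]
    return int(np.argmin(costs))

xs = np.linspace(0, 2, 201)
ns = [best_spike_count(xv, lam) for xv in xs]
# the optimal spike count stays at zero until x crosses (1 + lam) / 2
threshold = xs[ns.index(1)]
print(threshold)
```

Sub-threshold inputs are encoded by silence: the optimal spike count jumps from zero to one only once the input exceeds (1 + λ)/2, which is the rectification behavior the text describes.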
Such a cost function can be minimized by iteratively applying gradient descent and shrinkage steps [35], which is equivalent to thresholding (one-sided in the case of non-negative variables), Figure 6B,C. Therefore, neurons may be encoding a de-noised input signal.

Figure 6. Possible reasons for rectification in neurons. A. Cost function combining encoding error squared with metabolic expense vs. input signal for different values of the spike number N, Eq. (4). Note that the optimal number of spikes jumps from zero to one as a function of input. B. Estimating the most probable \u201cclean\u201d signal value for a continuous non-negative Laplacian signal and Gaussian noise, Eq. (3) (while setting w = 1). The parabolas (red) illustrate the quadratic log-likelihood term in (3) for different values of the measurement, s, while the linear function (blue) reflects the linear log-prior term in (3). C. The minimum of the combined cost function in B is at zero if s < λ, and grows linearly with s, if s > λ.

5. Discussion

In this paper, we demonstrated that the neuronal spike-generation mechanism can be viewed as an oversampling and noise-shaping AD converter, which encodes a rectified low-pass filtered somatic current as a digital spike train. Rectification by the spike generation mechanism may subserve both energy efficiency and de-noising. As the degree of noise-shaping in biological neurons exceeds that in IF neurons, or basic ΔΣ modulators, we suggest that neurons should be modeled by more advanced ΔΣ modulators, e.g. Figure 5B. Interestingly, ΔΣ modulators can also be viewed as coders with error prediction feedback [19].

Many publications have studied various aspects of spike generation in neurons, yet we believe that the framework [13-15] we adopt is different, and we discuss its relationship to some of these studies. 
Our framework is different from previous proposals to cast neurons as predictors [36, 37] because a different quantity is being predicted. The possibility of perfect decoding from a spike train with infinite temporal precision has been proven in [38]. Here, we are concerned with the more practical issue of how reconstruction error scales with the over-sampling ratio. Also, we consider linear decoding, which sets our work apart from [39]. Finally, previous experiments addressing noise-shaping [40] studied the power spectrum of the spike train rather than that of the encoding error.

Our work is aimed at understanding biological and computational principles of spike-generation and decoding and is not meant as a substitute for the existing phenomenological spike-generation models [41], which allow efficient fitting of parameters and prediction of spike trains [42]. Yet, the theoretical framework [13-15] we adopt may assist in building better models of spike generation for a given somatic current waveform. First, having interpreted spike generation as AD conversion, we can draw on the rich experience in signal processing to attack the problem. Second, this framework suggests a natural metric to compare the performance of different spike generation models in the high firing rate regime: the mean squared error between the injected current waveform and the filtered version of the spike train produced by a model, provided the total number of spikes is the same as in the experimental data. The AD conversion framework adds justification to the previously proposed spike distance obtained by subtracting low-pass filtered spike trains [43].

As the framework [13-15] we adopt relies on viewing neuronal computation as an analog-digital hybrid, which requires AD and DA conversion at every step, one may wonder about the reason for such a hybrid scheme. 
Starting with the early days of computers, the analog mode is known to be advantageous for computation. For example, performing addition of many variables in one step is possible in the analog mode simply by Kirchhoff's law, but would require hundreds of logical gates in the digital mode [44]. However, the analog mode is vulnerable to noise build-up over many stages of computation and is inferior in precisely communicating information over long distances under a limited energy budget [30, 31]. While early analog computers were displaced by their digital counterparts, evolution combined analog and digital modes into a computational hybrid [44], thus necessitating efficient AD and DA conversion, which was the focus of the present study.

We are grateful to L. Abbott, S. Druckmann, D. Golomb, T. Hu, J. Magee, N. Spruston, B. Theilman for helpful discussions and comments on the manuscript, and to X.-J. Wang, D. McCormick, K. Nagel, R. Wilson, K. Padmanabhan, N. Urban, S. Tripathy, H. Koendgen, and M. Giugliano for sharing their data. The work of D.S. was partially supported by the Intel Collaborative Research Institute for Computational Intelligence (ICRI-CI).

References
1. Ferster, D. and N. Spruston, Cracking the neural code. Science, 1995. 270: p. 756-7.
2. Panzeri, S., et al., Sensory neural codes using multiplexed temporal scales. Trends Neurosci, 2010. 33(3): p. 111-20.
3. Stevens, C.F. and A. Zador, Neural coding: The enigma of the brain. Curr Biol, 1995. 5(12): p. 1370-1.
4. Shadlen, M.N. and W.T. Newsome, The variable discharge of cortical neurons: implications for connectivity, computation, and information coding. J Neurosci, 1998. 18(10): p. 3870-96.
5. Shadlen, M.N. and W.T. Newsome, Noise, neural codes and cortical organization. Curr Opin Neurobiol, 1994. 4(4): p. 569-79.
6. Singer, W. and C.M. Gray, Visual feature integration and the temporal correlation hypothesis. Annu Rev Neurosci, 1995. 18: p. 555-86.
7. Meister, M., Multineuronal codes in retinal signaling. Proc Natl Acad Sci U S A, 1996. 93(2): p. 609-14.
8. Cook, E.P., et al., Dendrite-to-soma input/output function of continuous time-varying signals in hippocampal CA1 pyramidal neurons. J Neurophysiol, 2007. 98(5): p. 2943-55.
9. Kondgen, H., et al., The dynamical response properties of neocortical neurons to temporally modulated noisy inputs in vitro. Cereb Cortex, 2008. 18(9): p. 2086-97.
10. Tchumatchenko, T., et al., Ultrafast population encoding by cortical neurons. J Neurosci, 2011. 31(34): p. 12171-9.
11. Mainen, Z.F. and T.J. Sejnowski, Reliability of spike timing in neocortical neurons. Science, 1995. 268(5216): p. 1503-6.
12. Mar, D.J., et al., Noise shaping in populations of coupled model neurons. Proc Natl Acad Sci U S A, 1999. 96(18): p. 10450-5.
13. Shin, J., Adaptive noise shaping neural spike encoding and decoding. Neurocomputing, 2001. 38-40: p. 369-381.
14. Shin, J., The noise shaping neural coding hypothesis: a brief history and physiological implications. Neurocomputing, 2002. 44: p. 167-175.
15. Shin, J.H., Adaptation in spiking neurons based on the noise shaping neural coding hypothesis. Neural Networks, 2001. 14(6-7): p. 907-919.
16. Schreier, R. and G.C. Temes, Understanding Delta-Sigma Data Converters. 2005, Piscataway, NJ: IEEE Press, Wiley.
17. Candy, J.C., A use of limit cycle oscillations to obtain robust analog-to-digital converters. IEEE Trans. Commun., 1974. COM-22: p. 298-305.
18. Inose, H., Y. Yasuda, and J. Murakami, A telemetering system by code modulation - ΔΣ modulation. IRE Trans. Space Elect. Telemetry, 1962. SET-8: p. 204-209.
19. Spang, H.A. and P.M. Schultheiss, Reduction of quantizing noise by use of feedback. IRE Trans. Commun. Sys., 1962: p. 373-380.
20. Hovin, M., et al., Delta-Sigma modulation in single neurons, in IEEE International Symposium on Circuits and Systems, 2002.
21. Cheung, K.F. and P.Y.H. Tang, Sigma-Delta Modulation Neural Networks. Proc. IEEE Int Conf Neural Networks, 1993: p. 489-493.
22. Padmanabhan, K. and N. Urban, Intrinsic biophysical diversity decorrelates neuronal firing while increasing information content. Nat Neurosci, 2010. 13: p. 1276-82.
23. Urban, N. and S. Tripathy, Neuroscience: Circuits drive cell diversity. Nature, 2012. 488(7411): p. 289-90.
24. Nagel, K.I. and R.I. Wilson, personal communication.
25. Shin, J., C. Koch, and R. Douglas, Adaptive neural coding dependent on the time-varying statistics of the somatic input current. Neural Comp, 1999. 11: p. 1893-913.
26. Magee, J.C. and E.P. Cook, Somatic EPSP amplitude is independent of synapse location in hippocampal pyramidal neurons. Nat Neurosci, 2000. 3(9): p. 895-903.
27. Thorpe, S., D. Fize, and C. Marlot, Speed of processing in the human visual system. Nature, 1996. 381(6582): p. 520-2.
28. Tewksbury, S.K. and R.W. Hallock, Oversampled, linear predictive and noise-shaping coders of order N>1. IEEE Trans Circuits & Sys, 1978. CAS-25: p. 436-47.
29. Wang, X.J., et al., Adaptation and temporal decorrelation by single neurons in the primary visual cortex. J Neurophysiol, 2003. 89(6): p. 3279-93.
30. Attwell, D. and S.B. Laughlin, An energy budget for signaling in the grey matter of the brain. J Cereb Blood Flow Metab, 2001. 21(10): p. 1133-45.
31. Laughlin, S.B. and T.J. Sejnowski, Communication in neuronal networks. Science, 2003. 301(5641): p. 1870-4.
32. Lennie, P., The cost of cortical computation. Curr Biol, 2003. 13(6): p. 493-7.
33. Tibshirani, R., Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society Series B-Methodological, 1996. 58(1): p. 267-288.
34. Chen, S.S.B., D.L. Donoho, and M.A. Saunders, Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing, 1998. 20(1): p. 33-61.
35. Elad, M., et al., Wide-angle view at iterated shrinkage algorithms. Proc. SPIE, 2007. 6701: p. 70102.
36. Deneve, S., Bayesian spiking neurons I: inference. Neural Comp, 2008. 20: p. 91.
37. Yu, A.J., Optimal Change-Detection and Spiking Neurons, in NIPS, B. Scholkopf, J. Platt, and T. Hofmann, Editors. 2006.
38. Lazar, A. and L. Toth, Perfect Recovery and Sensitivity Analysis of Time Encoded Bandlimited Signals. IEEE Transactions on Circuits and Systems, 2004. 51(10).
39. Pfister, J.P., P. Dayan, and M. Lengyel, Synapses with short-term plasticity are optimal estimators of presynaptic membrane potentials. Nat Neurosci, 2010. 13(10): p. 1271-5.
40. Chacron, M.J., et al., Experimental and theoretical demonstration of noise shaping by interspike interval correlations. Fluctuations and Noise in Biological, Biophysical, and Biomedical Systems III, 2005. 5841: p. 150-163.
41. Pillow, J., Likelihood-based approaches to modeling the neural code, in Bayesian Brain: Probabilistic Approaches to Neural Coding, K. Doya, et al., Editors. 2007, MIT Press.
42. Jolivet, R., et al., A benchmark test for a quantitative assessment of simple neuron models. J Neurosci Methods, 2008. 169(2): p. 417-24.
43. van Rossum, M.C., A novel spike distance. Neural Comput, 2001. 13(4): p. 751-63.
44. Sarpeshkar, R., Analog versus digital: extrapolating from electronics to neurobiology. Neural Computation, 1998. 10(7): p. 1601-38.
", "award": [], "sourceid": 262, "authors": [{"given_name": "Dmitri", "family_name": "Chklovskii", "institution": null}, {"given_name": "Daniel", "family_name": "Soudry", "institution": null}]}