{"title": "Estimating time-varying input signals and ion channel states from a single voltage trace of a neuron", "book": "Advances in Neural Information Processing Systems", "page_first": 217, "page_last": 225, "abstract": "State-of-the-art statistical methods in neuroscience have enabled us to fit mathematical models to experimental data and subsequently to infer the dynamics of hidden parameters underlying the observable phenomena. Here, we develop a Bayesian method for inferring the time-varying mean and variance of the synaptic input, along with the dynamics of each ion channel from a single voltage trace of a neuron. An estimation problem may be formulated on the basis of the state-space model with prior distributions that penalize large fluctuations in these parameters. After optimizing the hyperparameters by maximizing the marginal likelihood, the state-space model provides the time-varying parameters of the input signals and the ion channel states. The proposed method is tested not only on the simulated data from the Hodgkin-Huxley type models but also on experimental data obtained from a cortical slice in vitro.", "full_text": "Estimating time-varying input signals and ion\n\nchannel states from a single voltage trace of a neuron\n\nDepartment of Human and Computer Intelligence, Ritsumeikan University\n\nRyota Kobayashi\u2217\n\nSiga 525-8577, Japan\n\nkobayashi@cns.ci.ritsumei.ac.jp\n\nYasuhiro Tsubo\n\nLaboratory for Neural Circuit Theory, Brain Science Institute, RIKEN\n\n2-1 Hirosawa Wako, Saitama 351-0198, Japan\n\nyasuhirotsubo@riken.jp\n\nInstitute of Physiology, Academy of Sciences of the Czech Republic\n\nVidenska 1083, 142 20 Prague 4, Czech Republic\n\nPetr Lansky\n\nlansky@biomed.cas.cz\n\nShigeru Shinomoto\n\nDepartment of Physics, Kyoto University\n\nKyoto 606-8502, Japan\n\nshinomoto@scphys.kyoto-u.ac.jp\n\nAbstract\n\nState-of-the-art statistical methods in neuroscience have enabled us to \ufb01t math-\nematical models to experimental data 
and subsequently to infer the dynamics of hidden parameters underlying the observable phenomena. Here, we develop a Bayesian method for inferring the time-varying mean and variance of the synaptic input, along with the dynamics of each ion channel from a single voltage trace of a neuron. An estimation problem may be formulated on the basis of the state-space model with prior distributions that penalize large fluctuations in these parameters. After optimizing the hyperparameters by maximizing the marginal likelihood, the state-space model provides the time-varying parameters of the input signals and the ion channel states. The proposed method is tested not only on the simulated data from the Hodgkin-Huxley type models but also on experimental data obtained from a cortical slice in vitro.\n\n1 Introduction\n\nOwing to the great advancements in measurement technology, a huge amount of data is generated in the fields of science, engineering, and medicine, and accordingly, there is an increasing demand for estimating the hidden states underlying the observed signals. Neurons transmit information by transforming synaptic inputs into action potentials; therefore, it is essential to investigate the dynamics of the synaptic inputs to understand the mechanism of information processing in neuronal systems. Here we propose a method to deduce these dynamics from experimental data.\n\n* Webpage: http://www.ritsumei.ac.jp/~r-koba84/index.html\n\nCortical neurons in vivo receive synaptic bombardments from thousands of neurons, which cause the membrane voltage to fluctuate irregularly. As each synaptic input is small and the synaptic input rate is high, the total input can be characterized by only its mean and variance, as in the mathematical description of the Brownian motion of a small particle suspended in a fluid.
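As a numerical aside to the Brownian-motion analogy above, the adequacy of a mean-and-variance description can be checked directly; the following sketch (with assumed rates and amplitudes, not values from the paper) compares summed Poisson synaptic bombardment against its matched diffusion parameters:

```python
# Sketch (assumed parameters): when synaptic inputs are small and frequent,
# the summed input is captured by its mean and variance alone, as in a diffusion.
import numpy as np

def poisson_input(rate_e, rate_i, a_e, a_i, dt, n_steps, rng):
    """Summed synaptic input per time step from Poisson excitatory/inhibitory spikes."""
    n_e = rng.poisson(rate_e * dt, n_steps)   # excitatory spike counts per step
    n_i = rng.poisson(rate_i * dt, n_steps)   # inhibitory spike counts per step
    return a_e * n_e - a_i * n_i

def diffusion_parameters(rate_e, rate_i, a_e, a_i):
    """Mean and variance per unit time of the matched Gaussian (diffusion) input."""
    mu = a_e * rate_e - a_i * rate_i
    var = a_e ** 2 * rate_e + a_i ** 2 * rate_i
    return mu, var

rng = np.random.default_rng(0)
dt, n = 0.1, 200_000                                   # ms, number of steps
J = poisson_input(5.0, 2.0, 0.02, 0.03, dt, n, rng)    # rates in spikes/ms
mu, var = diffusion_parameters(5.0, 2.0, 0.02, 0.03)
print(J.sum() / (n * dt), mu)    # empirical vs. matched mean per unit time
print(J.var() / dt, var)         # empirical vs. matched variance per unit time
```

Recovering the excitatory and inhibitory rates from an estimated (mean, variance) pair amounts to inverting the two moment equations in `diffusion_parameters`.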
Given the mean and variance of the synaptic input, it is possible to estimate the underlying excitatory and inhibitory firing rates of the respective presynaptic populations.\nThe membrane voltage fluctuations in a neuron are caused not only by the synaptic input but also by the hidden dynamics of ionic channels. These dynamics can be described by conductance-based models, including the Hodgkin-Huxley model. Many studies have examined the dynamics of ionic channels and their impact on neural coding properties [1].\nThere have been attempts to decode a voltage trace in terms of input parameters; the maximum likelihood estimator for current inputs was derived under an assumption of linear leaky integration [2, 3]. Empirical attempts were made to infer conductance inputs by fitting an approximate distribution of the membrane voltage to the experimental data [4, 5]. A linear regression method was proposed to infer the maximal ionic conductances and single synaptic inputs in the dendrites [6]. In all of these studies, the input parameters were assumed to be constant in time. In practice, however, such an assumption of constant input parameters is too strong a simplification for neuronal firing [7, 8].\nIn this paper, we propose a method for the simultaneous identification of the time-varying input parameters and the ion-channel dynamics from a single voltage trajectory. The problem is ill-posed, in the sense that the set of parameters giving rise to a particular voltage trace cannot be uniquely determined. However, the problem can be formulated as a statistical problem of estimating hidden states in a state-space model, and then it becomes solvable.
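How a random-walk prior renders such an ill-posed problem solvable can be previewed with a toy one-dimensional version (an assumed, simplified setting with a known noise level; this is not the paper's full model, which is developed in Section 3):

```python
# Toy sketch: recover a slowly varying input mean from noisy voltage increments
# using a random-walk prior with a scalar Kalman filter + RTS smoother.
# All parameter values are assumed for illustration.
import numpy as np

def kalman_smoother(z, dt, gamma2, s2):
    """Estimate m_j from z_j = m_j*dt + sqrt(s2*dt)*noise,
    under the random-walk prior m_{j+1} ~ N(m_j, gamma2*dt)."""
    n = len(z)
    m_f, p_f = np.zeros(n), np.zeros(n)
    m, p = 0.0, 1e3                             # diffuse initial state
    for j in range(n):
        p = p + gamma2 * dt                     # predict
        k = p * dt / (dt * dt * p + s2 * dt)    # Kalman gain (H = dt, R = s2*dt)
        m = m + k * (z[j] - dt * m)             # update with observed increment
        p = (1.0 - k * dt) * p
        m_f[j], p_f[j] = m, p
    m_s = m_f.copy()                            # Rauch-Tung-Striebel backward pass
    for j in range(n - 2, -1, -1):
        g = p_f[j] / (p_f[j] + gamma2 * dt)
        m_s[j] = m_f[j] + g * (m_s[j + 1] - m_f[j])
    return m_s

rng = np.random.default_rng(1)
dt, n = 0.1, 5000
t = np.arange(n) * dt
mu_true = 1.0 + 0.5 * np.sin(2 * np.pi * t / 100.0)    # slowly varying mean
z = mu_true * dt + np.sqrt(0.2 * dt) * rng.standard_normal(n)
mu_hat = kalman_smoother(z, dt, gamma2=3e-3, s2=0.2)
print(np.sqrt(np.mean((mu_hat - mu_true) ** 2)))       # smoothed RMSE
```

Without the prior, each increment on its own gives an estimate with standard deviation sqrt(s2/dt), far larger than the signal; the smoother pools neighboring increments, which is what makes the estimate well-defined.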
We verify the proposed method by applying it not only to numerical data obtained from Hodgkin-Huxley type models but also to biological data obtained in an in vitro experiment.\n\n2 Model\n\n2.1 Conductance-based model\n\nWe start from the conductance-based neuron model [1]:\n\ndV/dt = -\bar{g}_{leak}(V - E_{leak}) - \sum_{ion} J_{ion}(V, \vec{w}) + J_{syn}(t),   (1)\n\nwhere \bar{g}_{leak} := g_{leak}/C_m, J_{ion} := I_{ion}/C_m, J_{syn}(t) := I_{syn}(t)/C_m, V is the membrane voltage, \bar{g}_{leak} is the normalized leak conductance, E_{leak} is the reversal potential, J_{ion} are the voltage-dependent ionic inputs, \vec{w} := (w_1, w_2, ..., w_d) are the gating variables that characterize the states of the ion channels, J_{syn} is the synaptic input, C_m is the membrane capacitance, I_{ion} are the voltage-dependent ionic currents, and I_{syn}(t) is the synaptic input current. The ionic inputs J_{ion} are nonlinear functions of V and \vec{w}. Each gating variable w_i (i = 1, ..., d) follows the Langevin equation [9]:\n\ndw_i/dt = \alpha_i(V)(1 - w_i) - \beta_i(V) w_i + s_i \xi_i(t),   (2)\n\nwhere \alpha_i(V), \beta_i(V) are nonlinear functions of the voltage, s_i is the standard deviation of the channel noise, and \xi_i(t) is an independent Gaussian white noise with zero mean and unit variance. The synaptic input J_{syn}(t) is the sum of the synaptic inputs from a large number of presynaptic neurons. If each synaptic input is weak and the synaptic time constants are small, we can adopt a diffusion approximation [10],\n\nJ_{syn}(t) = \mu(t) + \sigma(t)\chi(t),   (3)\n\nwhere \mu(t), \sigma(t) are the instantaneous mean and standard deviation of the synaptic input, and \chi(t) is a Gaussian white noise with zero mean and unit variance. The components \mu(t) and \sigma^2(t) are considered to be the input signals to the neuron.\n\n2.2 Estimation Problem\n\nThe problem is to find the parameters of the model (1-3) from a single voltage trace {V(t)}. There are three kinds of parameters in the model. The first kind is the input signals {\mu(t), \sigma^2(t)}. The second kind is the gating variables {\vec{w}(t)} that characterize the activity of the ionic channels. The remaining parameters are the intrinsic parameters of the neuron, such as the standard deviation of the channel noise, the functional form of the voltage-dependent ionic inputs, and that of the rate constants. Some of these parameters, i.e., J_{ion}(V, \vec{w}), \alpha_i(V), \beta_i(V), \bar{g}_{leak} and E_{leak}, are measurable by additional experiments. After determining such intrinsic parameters of the third group by separate experiments, we estimate the parameters of the first and second groups from a single voltage trace.\n\n3 Method\n\nBecause of the ill-posedness of the estimation problem, we cannot determine the input signals from a voltage trace alone. To overcome this, we introduce random-walk-type priors for the input signals. Then, we determine the hyperparameters using the EM algorithm. Finally, we evaluate the Bayesian estimates of the input signals and the ion channel states with the Kalman filter and smoothing algorithm. Figure 1 is a schematic of the estimation method.\n\n3.1 Priors for Estimating Input Parameters\n\nLet us assume, for the sake of simplicity, that the voltage is sampled at N equidistant steps \delta t, denoting by V_j the observed voltage at time j\delta t. To apply the Bayesian approach, the conductance-based model (1, 3) is modified into the discretized form:\n\nV_{j+1} = V_j + { -\bar{g}_{leak}(V_j - E_{leak}) - \sum_{ion} J_{ion}(V_j, \vec{w}_j) + M_j } \delta t + \sqrt{S_j \delta t} \, \eta_j,   (4)\n\nwhere {M_j, S_j} are random functions of time and \eta_j is a standard Gaussian random variable. It is not possible to infer the large set of parameters {M_j, S_j} from a single voltage trace {V_j} alone, because the number of parameters overwhelms the number of data points. To resolve this, we introduce random-walk-type priors, i.e., we assume that the random functions are sufficiently smooth to satisfy the following conditions [11]:\n\nP[M_{j+1} | M_j = m] ~ N(m, \gamma_M^2 \delta t),   (5)\nP[S_{j+1} | S_j = s] ~ N(s, \gamma_S^2 \delta t),   (6)\n\nwhere \gamma_M and \gamma_S are hyperparameters that regulate the smoothness of M(t) and S(t), respectively, and N(\mu, \sigma^2) represents the Gaussian distribution with mean \mu and variance \sigma^2.\n\n3.2 Formulation as a State Space model\n\nThe model described in the previous sections can be represented as a state-space model, in which \vec{x}_j \equiv (M_j, S_j, \vec{w}_j) are the (d+2)-dimensional states and Z_j \equiv V_{j+1} - V_j (j = 1, ..., N-1) are the observations. The kinetic equations (2) and the prior distributions (5, 6) can be rewritten as\n\n\vec{x}_{j+1} = F_j \vec{x}_j + \vec{u}_j + G \vec{\eta}_j,   (7)\n\nwhere F_j = diag(1, 1, a_{1,j}, a_{2,j}, ..., a_{d,j}), G = diag(\gamma_M \sqrt{\delta t}, \gamma_S \sqrt{\delta t}, s_1 \sqrt{\delta t}, s_2 \sqrt{\delta t}, ..., s_d \sqrt{\delta t}), and \vec{u}_j = (0, 0, b_{1,j}, b_{2,j}, ..., b_{d,j})^T. F_j and G are (d+2) x (d+2) diagonal matrices, \vec{u}_j is a (d+2)-dimensional vector, and \vec{\eta}_j is a (d+2)-dimensional independent Gaussian random vector with zero mean and unit variance. The coefficients a_{i,j} and b_{i,j} are given by\n\na_{i,j} = 1 - {\alpha_i(V_j) + \beta_i(V_j)} \delta t,   b_{i,j} = \alpha_i(V_j) \delta t.\n\nThe observation equation is obtained from Eq.
(4):\n\nZ_j = -\bar{g}_{leak}(V_j - E_{leak}) \delta t - \sum_{ion} J_{ion}(V_j, \vec{w}_j) \delta t + M_j \delta t + \sqrt{S_j \delta t} \, \xi_j,   (8)\n\nwhere \xi_j is an independent Gaussian random variable with zero mean and unit variance. In the estimation problem, only {V_j}_{j=1}^{N} are observable; the states {\vec{x}_j}_{j=1}^{N-1} are hidden variables because they cannot be observed in an experiment.\n\nFigure 1: A schema of the estimation procedure: A conductance-based model neuron [12] is driven by a fluctuating input with mean \mu(t) and variance \sigma^2(t) varying in time. The \mu(t) (black line) and \mu(t) ± \sigma(t) (black dotted lines) are depicted in the second panel from the top. We estimate the input signals {\mu(t), \sigma^2(t)} and the gating variables {m(t), h(t), n(t), p(t)} from a single voltage trace (blue line). The estimated results are shown in the bottom panels. The input signals are in the two panels and the ion channel states are in the right shaded box. Gray dashed lines are the true values and red lines are their estimates.\n\n3.3 Hyperparameter Optimization\n\nWe determine the d+2 hyperparameters \vec{q} := (\gamma_M^2, \gamma_S^2, s_1^2, ..., s_d^2) by maximizing the marginal likelihood via the EM algorithm [13]. We maximize the likelihood integrated over the hidden variables {\vec{x}_t}_{t=1}^{N-1}:\n\n\vec{q}_{ML} = argmax_{\vec{q}} p(Z_{1:N-1} | \vec{q}) = argmax_{\vec{q}} \int p(Z_{1:N-1}, \vec{x}_{1:N-1} | \vec{q}) d\vec{x}_{1:N-1},   (9)\n\nwhere Z_{1:N-1} := {Z_j}_{j=1}^{N-1}, \vec{x}_{1:N-1} := {\vec{x}_j}_{j=1}^{N-1}, and d\vec{x}_{1:N-1} := \prod_{j=1}^{N-1} d\vec{x}_j. The maximization can be achieved by iteratively maximizing the Q function, the conditional expectation of the log likelihood:\n\n\vec{q}_{k+1} = argmax_{\vec{q}} Q(\vec{q} | \vec{q}_k),   (10)\n\nwhere Q(\vec{q} | \vec{q}_k) := E[log(P[Z_{1:N-1}, \vec{x}_{1:N-1} | \vec{q}]) | Z_{1:N-1}, \vec{q}_k], \vec{q}_k is the kth iterated estimate of \vec{q}, E[X|Y] is the conditional expectation of X given the value of Y, and P[X|Y] is the conditional probability distribution of X given the value of Y. The Q function can be written as\n\nQ(\vec{q} | \vec{q}_k) = \sum_{j=1}^{N-1} E[log(P[Z_j | \vec{x}_j]) | Z_{1:N-1}, \vec{q}_k] + \sum_{j=1}^{N-2} E[log(P[\vec{x}_{j+1} | \vec{x}_j, \vec{q}]) | Z_{1:N-1}, \vec{q}_k].   (11)\n\nThe (k+1)th iterated estimate of \vec{q} is determined by the conditions \partial Q / \partial q_i = 0:\n\nq_{i,k+1} = (1 / ((N-2) \delta t)) \sum_{j=1}^{N-2} E[(x_{i,j+1} - f_{i,j} x_{i,j} - u_{i,j})^2 | Z_{1:N-1}, \vec{q}_k],   (12)\n\nwhere q_{i,k+1} is the ith component of \vec{q}_{k+1}, x_{i,j} is the ith component of \vec{x}_j, f_{i,j} is the ith diagonal component of F_j, and u_{i,j} is the ith component of \vec{u}_j. As the EM algorithm increases the marginal likelihood at each iteration, the estimate converges to a local maximum. We calculate the conditional expectations in Eq. (12) using the Kalman filter and smoothing algorithm [11, 14, 15, 16, 17].\n\n3.4 Bayesian estimator for the input signal\n\nAfter fitting the hyperparameters, we evaluate the Bayesian estimator for the input signals and the gating variables:\n\n\vec{x}_j^* = E[\vec{x}_j | Z_{1:N-1}, \vec{q}],   (13)\n\nwhere \vec{x}_j^* is the Bayesian estimator for \vec{x}_j. Using this estimator, we can estimate not only the (smoothly) time-varying mean and variance of the synaptic input {\mu(t), \sigma^2(t)}, but also the time evolution of the gating variables \vec{w}(t). We evaluate the estimator (13) using the Kalman filter and smoothing algorithm [11, 14, 15, 16, 17].\n\n4 Applications\n\n4.1 Estimating time-varying input signals and ion channel states in a conductance-based model\n\nTo test the accuracy and robustness of our method, we applied it to simulated voltage traces. We adopted a Hodgkin-Huxley model with a microscopic description of the ionic channels [18], which consists of two ionic inputs J_{ion} (ion \in {Na, Kd}): J_{Na} = \gamma_{Na} [m3h1] (V - E_{Na}) and J_{Kd} = \gamma_K [n4] (V - E_K), where \gamma_{Na(K)} is the conductance of a single sodium (potassium) ion channel in the open state, [m3h1] ([n4]) is the number of sodium (potassium) channels that are open, and E_{Na(K)} is the sodium (potassium) reversal potential. There are 8 (5) states in a sodium (potassium) channel, and the state transitions are described by a Markov chain model. Details of this model can be found in [18].\nFirst, we applied the proposed method to sinusoidally modulated input signals. Figure 2B compares the time-varying input signals {\mu(t), \sigma^2(t)} with their estimates, and Figure 2C compares the open probability of each ion channel with its estimate. In this case, the method provides accurate estimates. Second, we examined whether the method also works in the presence of a discontinuity in the input signals. Although discontinuous inputs do not satisfy the smoothness assumption (5, 6), the method gives accurate estimates (Figure 3A).
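The jump test just described can be reproduced in spirit with a minimal sketch; here a leaky integrator stands in for the full Hodgkin-Huxley model, and all parameter values (leak conductance, input levels, noise) are assumed rather than taken from the paper:

```python
# Euler-Maruyama simulation (simplified leaky integrator, not the full
# Hodgkin-Huxley model) driven by a stepwise mean input with constant variance.
import numpy as np

def simulate_trace(mu, sigma2, g_leak=0.1, e_leak=-70.0, dt=0.1, rng=None):
    """Integrate dV = (-g_leak*(V - e_leak) + mu(t)) dt + sigma(t) dW."""
    if rng is None:
        rng = np.random.default_rng(0)
    v = np.empty(len(mu))
    v[0] = e_leak
    for j in range(len(mu) - 1):
        drift = -g_leak * (v[j] - e_leak) + mu[j]
        v[j + 1] = v[j] + drift * dt + np.sqrt(sigma2[j] * dt) * rng.standard_normal()
    return v

n = 20_000
mu = np.where(np.arange(n) < n // 2, 0.5, 1.5)   # mean input jumps halfway through
sigma2 = np.full(n, 0.1)                         # constant input variance
v = simulate_trace(mu, sigma2)
# The stationary mean in each half is e_leak + mu/g_leak.
print(v[5000:10000].mean(), v[15000:].mean())
```

Feeding such a trace to the estimator should recover the step in the mean input up to the smoothing imposed by the prior, which is the behavior illustrated in Figure 3A.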
Third, the estimation method was applied to a conductance input model, which is given by J_{syn}(t) = \bar{g}_E \sum_{j,k} \delta(t - t_{E,j}^k)(V_E - V(t)) + \bar{g}_I \sum_{j,k} \delta(t - t_{I,j}^k)(V_I - V(t)), where the subscript E (I) denotes the excitatory (inhibitory) synapse, \bar{g}_{E(I)} is the normalized postsynaptic conductance, V_{E(I)} is the reversal potential, t_{E(I),j}^k is the kth spike time of the jth presynaptic neuron, and \delta(t) is the Dirac delta function. It can be seen from Figure 3B that the method provides accurate estimates except during action potentials, when the input undergoes a rapid modulation. Fourth, the effect of observation noise on the estimation accuracy was investigated. We introduced observation noise in the following manner: Z_{obs,j} = Z_j + \sigma_{obs} \eta_j, where Z_{obs,j} := V_{obs,j+1} - V_{obs,j} is the observed value, V_{obs,j} is the recorded voltage at time step j, \sigma_{obs} is the standard deviation of the observation noise, and \eta_j is an independent Gaussian random variable with zero mean and unit variance. Mathematically, this is equivalent to assuming the observation noise to be an additive Gaussian white noise on the voltage. In such a case, the estimation method reckons the input variance as the sum of the original input variance \sigma^2(t) and the observation noise variance \sigma_{obs}^2 (Figure 3C).\nFurthermore, we also tested the present framework for its potential applicability to more complicated conductance-based models, which have slow ionic currents. To observe this, we adopted a conductance-based model proposed by Pospischil et al. [12] that has three ionic inputs J_{ion} (ion \in {Na, Kd, M}): J_{Na} = \bar{g}_{Na} m^3 h (V - E_{Na}), J_{Kd} = \bar{g}_{Kd} n^4 (V - E_K), and J_M = \bar{g}_M p (V - E_K), where {m, h, n, p} are the gating variables, \bar{g}_{ion} represents the normalized ionic conductances, and E_{ion} are the reversal potentials. (See [12] for details.) An example of the estimation result is shown in Figure 1.\n\nFigure 2: Estimation of input signals and ion channel states from the simulated data: A. Voltage trace. B. Estimates of the mean \mu and variance \sigma^2 of the input signals. C. Estimates of the ion channel states. The time evolution of the open probabilities of the sodium (Na) and potassium (K) channels is shown. The gray dashed lines and red lines represent the true values and the estimates, respectively.\n\n4.2 Estimating time-varying input signals and ion channel states in experimental data\n\nWe applied the proposed method to experimental data. A randomly fluctuating current, generated by the sum of filtered time-dependent Poisson processes, was injected into a neuron in the rat motor cortex, and the membrane voltage was recorded intracellularly in vitro. Details of the experimental procedure can be found in [19, 20]. We adopted the neuron model proposed by Pospischil et al. [12] for the membrane voltage. After tuning the ionic conductances and kinetic parameters, six hyperparameters \gamma_{M,S} and s_{m,h,n,p} were optimized using Eq. (12). To avoid over-fitting, we set the upper limit s_{max} = 0.002 for the hyperparameters of the gating variables. The observation noise variance was estimated from data recorded in the absence of stimulation: \sigma_{obs}^2 = 0.66 [(mV)^2/ms]. The variance of the input signal was estimated by subtracting the observation noise variance from the estimated variance. In this way, the mean and standard deviation (SD) of the input as well as\n\nFigure 3: Robustness of the estimation method: A. Constant input with a jump. B. Conductance input. C. Sinusoidal input with observation noise. The voltage traces used for the estimation and the estimates of the input signals {\mu(t), \sigma^2(t)} are shown. In A and B, the gray dashed and red lines represent the true and the estimated input signals, respectively.
In C, the blue dotted line represents the true input variance \sigma^2(t), the gray dotted line represents the sum of the true input variance and the true observation noise variance, \sigma_{obs}^2 = 1.6 [(mV)^2/ms], and the red line represents the estimated variance.\n\nthe gating variables were estimated. The time-varying mean and SD of the input are compared with their estimates in Figure 4B. The results suggest that the proposed method is applicable to these experimental data.\n\n5 Discussion\n\nWe have developed a method for estimating not only the time-varying mean and variance of the synaptic input but also the ion channel states from a single voltage trace of a neuron. It was confirmed, by applying it to simulated data, that the proposed method is capable of providing accurate estimates. We also tested the general applicability of this method by applying it to experimental data obtained with current injection into a neuron in a cortical slice preparation.\nUntil now, several attempts have been made to estimate the synaptic input from experimental data [2, 4, 5, 8, 21, 22]. The new aspects introduced in this paper are the implementation of a state-space model that allows the input signals to fluctuate in time and the gating variables to vary according to the voltage. However, the present method rests on several simplifying assumptions, whose validity should be verified.\nFirst, we approximated the synaptic inputs by white (uncorrelated) noise. In practice, the synaptic inputs are conductance-based and inevitably have correlations of a few milliseconds. We have confirmed the applicability of the model to numerical data generated with conductance input, and also to experimental data in which a temporally correlated current was injected into a neuron.
These results indicate that the white noise assumption in our method is robust in practice.\nSecond, we constructed the state-space method by assuming smooth fluctuation of the input signals or, equivalently, by penalizing rapid fluctuations in the prior distribution. By applying the present method to the case of a stepwise change in the input signals, we found that the method is rather robust against an abrupt change.\nThird, we also approximated the channel noise by white noise. We tested our method by applying it to a more realistic Hodgkin-Huxley type model in which the individual channels are modeled by a Markov chain [18]. It was confirmed that the present white noise approximation is acceptable for such realistic models.\n\nFigure 4: Estimation of input signals and ion channel states from experimental data. A. Voltage trace recorded intracellularly in vitro. A fluctuating current with sinusoidally modulated mean and standard deviation (SD) was injected into the neuron. B. Estimates of the time-varying mean and SD of the input. The gray dashed and red lines represent the true values and the estimates, respectively. C. Estimates of the ion channel states. The red lines represent the estimates of the gating variables.\n\nFourth, we ignored possible nonlinear effects in dendritic conduction, such as dendritic spikes and backpropagating action potentials. It would be worthwhile to consider augmenting the model by dividing the neuron into multiple compartments, as has been done in Huys et al. [6].\nFifth, in analyzing the experimental data, we employed fixed functions for the ionic currents and the rate constants and assumed that some of the intrinsic parameters are known. It may be possible to infer the maximal ionic conductances using the particle filter method developed by Huys and Paninski [23], but their method is not able to identify the ionic currents and the rate constants.
In our examination of biological data, we determined these parameters empirically from current-voltage data. An important direction for this study is to develop the method such that models are selected solely from the voltage trace.\n\nAcknowledgments\n\nThis study was supported by the Support Center for Advanced Telecommunications Technology Research, Foundation; the Yazaki Memorial Foundation for Science and Technology; and the Ritsumeikan University Research Funding Research Promoting Program "Young Scientists (Start-up)" and "General Research" to R.K.; a Grant-in-Aid for Young Scientists (B) from the MEXT Japan (22700323) to Y.T.; Grants-in-Aid for Scientific Research from the MEXT Japan (20300083, 23115510) to S.S.; and the Center for Neurosciences LC554, Grant No. AV0Z50110509, and the Grant Agency of the Czech Republic, project P103/11/0282, to P.L.\n\nReferences\n\n[1] Koch, C. (1999) Biophysics of Computation: Information Processing in Single Neurons. Oxford University Press.\n[2] Lansky, P. (1983) Math. Biosci. 67: 247-260.\n[3] Lansky, P. & Ditlevsen, S. (2008) Biol. Cybern. 99: 253-262.\n[4] Rudolph, M., Piwkowska, Z., Badoual, M., Bal, T. & Destexhe, A. (2004) J. Neurophysiol. 91: 2884-2896.\n[5] Pospischil, M., Piwkowska, Z., Bal, T. & Destexhe, A. (2009) Neurosci. 158: 545-552.\n[6] Huys, Q.J.M., Ahrens, M.B. & Paninski, L. (2006) J. Neurophysiol. 96: 872-890.\n[7] Shinomoto, S., Sakai, S. & Funahashi, S. (1999) Neural Comput. 11: 935-951.\n[8] DeWeese, M.R. & Zador, A.M. (2006) J. Neurosci. 26: 12206-12218.\n[9] Fox, R.F. (1997) Biophys. J. 72: 2068-2074.\n[10] Burkitt, A.N. (2006) Biol. Cybern. 95: 1-19.\n[11] Kitagawa, G. & Gersh, W. (1996) Smoothness Priors Analysis of Time Series. New York: Springer-Verlag.\n[12] Pospischil, M., Toledo-Rodriguez, M., Monier, C., Piwkowska, Z., Bal, T., Fregnac, Y., Markram, H. & Destexhe, A. (2008) Biol. Cybern.
99: 427-441.\n[13] Dempster, A.P., Laird, N.M. & Rubin, D.B. (1977) J. R. Stat. Soc. 39: 1-38.\n[14] Smith, A.C. & Brown, E.N. (2003) Neural Comput. 15: 965-991.\n[15] Eden, U.T., Frank, L.M., Barbieri, R., Solo, V. & Brown, E.N. (2004) Neural Comput. 16: 971-998.\n[16] Paninski, L., Ahmadian, Y., Ferreira, D.G., Koyama, S., Rad, K.R., Vidne, M., Vogelstein, J. & Wu, W. (2010) J. Comput. Neurosci. 29: 107-126.\n[17] Koyama, S., Pérez-Bolde, L.C., Shalizi, C.R. & Kass, R.E. (2010) J. Am. Stat. Assoc. 105: 170-180.\n[18] Schneidman, E., Freedman, B. & Segev, I. (1998) Neural Comput. 10: 1679-1703.\n[19] Tsubo, Y., Takada, M., Reyes, A.D. & Fukai, T. (2007) Eur. J. Neurosci. 25: 3429-3441.\n[20] Kobayashi, R., Tsubo, Y. & Shinomoto, S. (2009) Front. Comput. Neurosci. 3: 9.\n[21] Lansky, P., Sanda, P. & He, J. (2006) J. Comput. Neurosci. 21: 211-223.\n[22] Kobayashi, R., Shinomoto, S. & Lansky, P. (2011) Neural Comput. 23: 3070-3093.\n[23] Huys, Q.J.M. & Paninski, L. (2009) PLoS Comput. Biol. 5: e1000379.\n", "award": [], "sourceid": 181, "authors": [{"given_name": "Ryota", "family_name": "Kobayashi", "institution": null}, {"given_name": "Yasuhiro", "family_name": "Tsubo", "institution": null}, {"given_name": "Petr", "family_name": "Lansky", "institution": null}, {"given_name": "Shigeru", "family_name": "Shinomoto", "institution": null}]}