{"title": "Efficient estimation of hidden state dynamics from spike trains", "book": "Advances in Neural Information Processing Systems", "page_first": 227, "page_last": 234, "abstract": "", "full_text": "Ef\ufb01cient estimation of hidden state dynamics\n\nfrom spike trains\n\nM\u00b4arton G. Dan\u00b4oczy\n\nInst. for Theoretical Biology\nHumboldt University, Berlin\n\nInvalidenstr. 43\n\n10115 Berlin, Germany\n\nRichard H. R. Hahnloser\nInst. for Neuroinformatics\n\nUNIZH / ETHZ\n\nWinterthurerstrasse 190\n8057 Zurich, Switzerland\n\nm.danoczy@biologie.hu-berlin.de\n\nrich@ini.phys.ethz.ch\n\nAbstract\n\nNeurons can have rapidly changing spike train statistics dictated by the\nunderlying network excitability or behavioural state of an animal. To\nestimate the time course of such state dynamics from single- or multi-\nple neuron recordings, we have developed an algorithm that maximizes\nthe likelihood of observed spike trains by optimizing the state lifetimes\nand the state-conditional interspike-interval (ISI) distributions. Our non-\nparametric algorithm is free of time-binning and spike-counting prob-\nlems and has the computational complexity of a Mixed-state Markov\nModel operating on a state sequence of length equal to the total num-\nber of recorded spikes. As an example, we \ufb01t a two-state model to paired\nrecordings of premotor neurons in the sleeping songbird. We \ufb01nd that the\ntwo state-conditional ISI functions are highly similar to the ones mea-\nsured during waking and singing, respectively.\n\n1\n\nIntroduction\n\nIt is well known that neurons can suddenly change \ufb01ring statistics to re\ufb02ect a macroscopic\nchange of a nervous system. Often, \ufb01ring changes are not accompanied by an immediate\nbehavioural change, as is the case, for example, in paralysed patients, during sleep [1],\nduring covert discriminative processing [2], and for all in-vitro studies [3]. 
In all of these cases, changes in some hidden macroscopic state can only be detected by close inspection of single or multiple spike trains. Our goal is to develop a powerful but computationally simple tool for point processes such as spike trains. From spike train data, we want to extract continuously evolving hidden variables, assuming a discrete set of possible states.

Our model for classifying spikes into discrete hidden states is based on three assumptions:

1. Hidden states form a continuous-time Markov process and thus have exponentially distributed lifetimes.

2. State switching can occur only at the time of a spike (where there is observable evidence for a new state).

3. In each of the hidden states, spike trains are generated by mutually independent renewal processes.

1. For a continuous-time Markov process, the probability of staying in state S = i for a time interval T > t is given by P_i(t) = exp(−r_i t), where r_i is the escape rate (or hazard rate) of state i. The mean lifetime τ_i is defined as the inverse of the escape rate, τ_i = 1/r_i.

As a corollary, it follows that the probability of staying in state i for a particular duration equals the probability of surviving for a fraction of that duration times the probability of surviving for the remaining time, i.e., the state survival probability P_i(t) satisfies the product identity

P_i(t_1 + t_2) = P_i(t_1) P_i(t_2).

2. According to the second assumption, state switching can occur at any spike, irrespective of which neuron fired the spike. In the following, we shall refer to a spike fired by any of the neurons as an event (where state switching might occur). Note that if two (or more) neurons happen to fire a spike at exactly the same time, the respective spikes are regarded as two (or more) distinct events.
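As a minimal numerical sketch of the first assumption (the escape rate below is a made-up value, not one estimated from data), the exponential survival probability and its product identity can be checked directly:

```python
import math

def survival(rate, t):
    """P_i(t) = exp(-r_i * t): probability of staying in a state
    with escape rate `rate` for longer than t."""
    return math.exp(-rate * t)

# Hypothetical escape rate, for illustration only.
r = 0.8            # escape rate [1/s]
tau = 1.0 / r      # mean state lifetime [s]

# Memoryless product identity: P(t1 + t2) = P(t1) * P(t2).
t1, t2 = 0.4, 1.3
assert abs(survival(r, t1 + t2) - survival(r, t1) * survival(r, t2)) < 1e-12
```

The identity holds for any split t_1 + t_2 of the interval; this memoryless property is what allows state survival to be evaluated event by event.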
The collection of event times is denoted by t_e.

Combining the first two assumptions, we formulate the hidden state sequence at the events (i.e., the observation points) as a non-homogeneous discrete Markov chain. Accordingly, the probability of remaining in state i for the duration of the interevent-interval (IEI) Δt_e = t_e − t_{e−1} is given by the state survival probability P_i(Δt_e). The probability of changing state is then 1 − P_i(Δt_e).

3. In each state i, the spike trains are assumed to be generated by a renewal process that randomly draws interspike-intervals (ISIs) t from a probability density function (pdf) h_i(t). Because every IEI is only a fraction of an ISI, instead of working with ISI distributions, we use an equivalent formulation based on the conditional intensity function (CIF) λ_i(φ) [4]. The CIF, also called the hazard function in reliability theory, is a generalization of the Poisson firing rate. It is defined as the probability density of spiking in the time interval [φ, φ + dt], given that no spike has occurred in the interval [0, φ) since the last spike. In the following, the variable φ, i.e., the time that has elapsed since the last spike, shall be referred to as the phase [5]. Using the CIF, the ISI pdf can be expressed by the fundamental equation of renewal theory,

h_i(t) = exp(−∫_0^t λ_i(φ) dφ) λ_i(t).    (1)

At each event e, we observe the phase trajectory of every neuron traced out since the last event. It is clear that in multiple-electrode recordings the phase trajectories between events are not independent, since they have to start where the previous trajectory ended. Therefore, our model violates the observation independence assumption of standard Hidden Markov Models (HMMs). Our model is, in formal terms, a mixed-state Markov model [6], with the architecture of a double chain [7].
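The fundamental equation of renewal theory lends itself to a quick numerical check. The sketch below (with an arbitrarily chosen constant hazard, not a fitted CIF) evaluates equation 1 on a grid and verifies that a constant CIF recovers the exponential ISI density:

```python
import numpy as np

def isi_pdf_from_cif(cif, t_grid):
    """Evaluate equation 1, h(t) = exp(-int_0^t lambda(phi) dphi) * lambda(t),
    on a grid, using trapezoidal integration for the exponent."""
    lam = cif(t_grid)
    steps = 0.5 * (lam[1:] + lam[:-1]) * np.diff(t_grid)
    cum = np.concatenate(([0.0], np.cumsum(steps)))  # integral of the CIF up to each grid point
    return np.exp(-cum) * lam

# For a constant (Poisson) CIF with rate 2 Hz, equation 1 must give
# the exponential ISI density 2 * exp(-2t).
t = np.linspace(0.0, 5.0, 2001)
h = isi_pdf_from_cif(lambda x: np.full_like(x, 2.0), t)
assert np.allclose(h, 2.0 * np.exp(-2.0 * t))

# ...and the pdf integrates to ~1 over a sufficiently long window.
mass = np.sum(0.5 * (h[1:] + h[:-1]) * np.diff(t))
assert abs(mass - 1.0) < 1e-2
```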
Such models are generalizations of HMMs in that the observable outputs may depend not only on the current hidden state, but also on past observations (formally, the mixed state is formed by combining the hidden and observable states).

In our model, hidden state transition probabilities are characterized by the escape rates r_i, and observable state transition probabilities by the CIFs λ^n_i for neuron n in hidden state i. Our goal is to find a set Ψ of model parameters such that the likelihood

Pr{O | Ψ} = Σ_{S∈S} Pr{S, O | Ψ}

of the observation sequence O is maximized.

As a first step, we will derive an expression for the combined likelihood Pr{S, O | Ψ}. Then, we will apply the expectation maximization (EM) algorithm to find the optimal parameter set.

2 Transition probabilities

The mixed state at event e shall be composed of the hidden state S_e and the observable outputs O^n_e (for neurons n ∈ {1, . . . , N}).

Hidden state transitions. In classical mixed-state Markov models, the hidden state transition probabilities are constant. In our model, however, we describe time as a continuous quantity and observe the system whenever a spike occurs, thus at non-equidistant intervals. Consequently, hidden state transitions depend explicitly on the elapsed time since the last observation, i.e., on the IEIs Δt_e. The transition probability a_ij(Δt_e) from hidden state i to hidden state j is then given by

a_ij(Δt_e) = exp(−r_j Δt_e)                  if i = j,
a_ij(Δt_e) = [1 − exp(−r_j Δt_e)] g_ij       otherwise,    (2)

where g_ij is the conditional probability of making a transition from state i into a new state j, given that j ≠ i.
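A small sketch of equation 2 (with made-up escape rates and the two-state case, where the off-diagonal g is forced to one) shows how the stay and switch probabilities depend on the inter-event interval:

```python
import math

def a_ij(i, j, dt, rates, g):
    """Equation 2: hidden-state transition probability over an inter-event
    interval dt. As printed in the paper, the escape rate is indexed by
    the destination state j. Rates and g here are illustrative values."""
    if i == j:
        return math.exp(-rates[j] * dt)
    return (1.0 - math.exp(-rates[j] * dt)) * g[i][j]

# Two-state example: with only two states, g_01 = g_10 = 1 is forced.
rates = [0.8, 2.0]                 # made-up escape rates r_i = 1/tau_i
g = [[0.0, 1.0], [1.0, 0.0]]

dt = 0.25
stay, switch = a_ij(0, 0, dt, rates, g), a_ij(0, 1, dt, rates, g)
assert 0.0 < stay < 1.0 and 0.0 < switch < 1.0
# For a vanishing inter-event interval, switching becomes impossible.
assert a_ij(0, 1, 0.0, rates, g) == 0.0 and a_ij(0, 0, 0.0, rates, g) == 1.0
```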
Thus, g_ij has to satisfy the constraint Σ_j g_ij = 1, with g_ii = 0.

Observable state transitions. The observation at event e is defined as O_e = {Φ^n_e, ν_e}, where ν_e contains the index of the neuron that has triggered event e by emitting a spike, and Φ^n_e = (inf Φ^n_e, sup Φ^n_e] is the phase interval traced out by neuron n since its last spike. Observations form a cascade: after a spike, the phase of the respective neuron is immediately reset to zero. The interval's bounds are thus defined by

sup Φ^n_e = inf Φ^n_e + Δt_e   and   inf Φ^n_e = 0 if ν_{e−1} = n, sup Φ^n_{e−1} otherwise.

The observable transition probability p_i(O_e) = Pr{O_e | O_{e−1}, S_e = i} is the probability of observing output O_e, given the previous output O_{e−1} and the current hidden state S_e. With our independence assumption (3.), we can give its density as the product of every neuron's probability of having survived the respective phase interval Φ^n_e that it has traced out since its last spike, multiplied by the spiking neuron's firing rate (compare equation 1):

p_i(O_e) = [ Π_n exp(−∫_{Φ^n_e} λ^n_i(φ) dφ) ] λ^{ν_e}_i(sup Φ^{ν_e}_e).    (3)

Note that in the case of a single-neuron recording, this reduces to the ISI pdf.

To give a closed form of the observable transition pdf, several approaches are conceivable. Here, for the sake of flexibility and computational simplicity, we approximate the CIF λ^n_i for neuron n in state i by a step function, assuming that its value is constant inside small, arbitrarily spaced bins B^n(b), b ∈ {1, . . . , N^n_bins}. That is, λ^n_i(φ) ≈ ℓ^n_i(b) for all φ ∈ B^n(b).

In order to use the discretized CIFs ℓ^n_i(b), we also discretize Φ^n_e: the fractions f^n_e(b) ∈ [0, 1] represent how much of neuron n's phase bin B^n(b) has been traced out since the last event. For example, if event e − 1 happened in the middle of neuron n's phase bin 2 and event e happened ten percent into its phase bin 4, then f^n_e(2) = 0.5, f^n_e(3) = 1, and f^n_e(4) = 0.1, whereas f^n_e(b) = 0 for all other b (Figure 1).

Making use of these discretizations, the integral in equation 3 is approximated by a sum:

p_i(O_e) ≈ [ Π_n exp(−Σ_{b=1}^{N^n_bins} f^n_e(b) ℓ^n_i(b) ||B^n(b)||) ] λ^{ν_e}_i(sup Φ^{ν_e}_e),    (4)

with ||B^n(b)|| denoting the width of neuron n's phase bin b.

Equations 2 and 4 fully describe transitions in our mixed-state Markov model. Next, we apply the EM algorithm to find optimal values of the escape rates r_i, the conditional hidden state transition probabilities g_ij and the discretized CIFs ℓ^n_i(b), given a set of spike trains.

Figure 1: Two spike trains are combined to form the event train shown in the bottom row. The phase bins are shown below the spike trains; they are labelled with the corresponding bin number. As an example, for the second neuron, the fractions f^2_e(b) of its phase bins that have been traced out since event e − 1 are indicated by the horizontal arrow. They are nonzero for b = 2, 3, and 4.

3 Parameter estimation

Our goal is to find model parameters Ψ = {r_i, g_ij, ℓ^n_i(b)}, such that the likelihood Pr{O | Ψ} of observation sequence O is maximized.
According to the EM algorithm, we can find such values by iterating over models

Ψ_new = argmax_Ψ Σ_{S∈S} Pr{S | O, Ψ_old} ln Pr{S, O | Ψ},    (5)

where S is the set of all possible hidden state sequences. The product of equations 2 and 4 over all events is proportional to the combined likelihood Pr{S, O | Ψ}:

Pr{S, O | Ψ} ∝ Π_e a_{S_{e−1} S_e}(Δt_e) p_{S_e}(O_e).

Because of the logarithm in equation 5, the maximization over escape rates can be separated from the maximization over conditional intensity functions. We define the abbreviations ξ_ij(e) = Pr{S_{e−1} = i, S_e = j | O, Ψ_old} and γ_i(e) = Pr{S_e = i | O, Ψ_old} for the posterior probabilities appearing in equation 5. In practice, both expressions are computed in the expectation step by the classic forward-backward algorithm [8], using equations 2 and 4 as the transition probabilities. With the abbreviations defined above, equation 5 is split into

r^new_j = argmax_r [ Σ_e ξ_jj(e) (−r Δt_e) + Σ_{e, i≠j} ξ_ij(e) ln(1 − exp(−r Δt_e)) ],    (6)

ℓ^n_i(b)^new = argmax_ℓ [ −ℓ Σ_e γ_i(e) f^n_e(b) ||B^n(b)|| + ln(ℓ) Σ_{e: ν_e=n ∧ sup Φ^n_e ∈ B^n(b)} γ_i(e) ],    (7)

g^new_ij = argmax_g [ ln(g) Σ_e ξ_ij(e) ]   with g^new_ii = 0 and Σ_j g^new_ij = 1.    (8)

In order to perform the maximization in equation 6, we compute its derivative with respect to r and set it to zero:

0 = Σ_e ξ_jj(e) Δt_e + Σ_{e, i≠j} ξ_ij(e) [ Δt_e − Δt_e / (1 − exp(−r^new_j Δt_e)) ].

This equation cannot be solved analytically, but being just a one-dimensional optimization problem, a solution can be found using numerical methods, such as the Levenberg-Marquardt algorithm. The singularity in the case of Δt_e = 0, which arises when two or more spikes occur at the same time, needs the special treatment of replacing the respective fraction by its limit, 1/r^new_j.

To obtain the reestimation formula for the discretized CIFs, equation 7's derivative with respect to ℓ is set to zero. The result can be solved directly and yields

ℓ^n_i(b)^new = Σ_{e: ν_e=n ∧ sup Φ^n_e ∈ B^n(b)} γ_i(e) / ( Σ_e γ_i(e) f^n_e(b) ||B^n(b)|| ).

Finally, to obtain the reestimation formula for the conditional hidden state transition probabilities g_ij, we solve equation 8 using Lagrange multipliers, resulting in

g^new_ij = Σ_e ξ_ij(e) / Σ_{e, k≠i} ξ_ik(e)   for i ≠ j.

4 Application to spike trains from the sleeping songbird

We have applied our model to spike train data from sleeping songbirds [9]. It has been found that during sleep, neurons in the vocal premotor area RA exhibit spontaneous activity that at times resembles premotor activity during singing [10, 9].

We train our model on the spike train of a single RA neuron in the sleeping bird with N_bins = 100, where the first bin extends from the sample time to 1 ms and the subsequent 99 bins are logarithmically spaced up to the largest ISI. After convergence, we find that the ISI pdfs associated with the two hidden states qualitatively agree with the pdfs recorded in the awake non-singing bird and the awake singing bird, respectively (Figure 2). ISI pdfs were derived from the CIFs by using equation 1.
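This conversion from a discretized, piecewise-constant CIF to an ISI pdf via equation 1 can be sketched as follows; the bin edges and hazard values below are hypothetical placeholders, not the fitted ones:

```python
import numpy as np

def isi_pdf_from_binned_cif(edges, lam):
    """Turn a piecewise-constant CIF (value lam[b] on [edges[b], edges[b+1]))
    into an ISI pdf evaluated at the left edge of each bin, via equation 1."""
    widths = np.diff(edges)
    # Integral of the step-function CIF up to each bin edge.
    cum = np.concatenate(([0.0], np.cumsum(lam * widths)))
    # h(t) = exp(-integral) * lambda, at the left edge of each bin.
    return np.exp(-cum[:-1]) * lam

# Hypothetical logarithmically spaced bins, in the spirit of the paper's setup.
edges = np.logspace(-3, 0, 11)           # 10 bin edges from 1 ms to 1 s
edges = np.concatenate(([0.0], edges))   # the first bin starts at phase 0
lam = np.full(11, 5.0)                   # constant 5 Hz hazard (made up)
pdf = isi_pdf_from_binned_cif(edges, lam)
assert pdf.shape == (11,)
assert np.all(pdf[1:] < pdf[:-1])        # for a constant hazard the pdf decays
```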
For the state-conditional ISI histograms, we first ran the Viterbi algorithm to find the most likely hidden-state sequence and then sorted the spikes into two groups, for which the ISI histograms were computed.

We find that sleep-related activity in the RA neuron of Figure 2 is best described by random switching between a singing-like state of lifetime τ1 = 1.18 s ± 0.38 s and an awake, non-singing-like state of lifetime τ2 = 2.26 s ± 0.42 s. Standard deviations of the lifetime estimates were computed by dividing the spike train into 30 data windows of 10 s duration each and computing the Jackknife variance [11] on the truncated spike trains. The difference between the singing-like state in our model and the true singing ISI pdf shown in Figure 2 is more likely due to generally reduced burst rates during sleep than to a particularity of the examined neuron.

Next, we applied our model to simultaneous recordings from pairs of RA neurons. By fitting two separate models (with identical phase binning) to the two spike trains, and after running the Viterbi algorithm to find the most likely hidden state sequences, we find good agreement between the two sequences (Figure 3, top row, and Figure 4c). The correspondence of the hidden state sequences suggests a common network mechanism for the generation of the singing-like states in both neurons. We thus applied a single model to both spike trains and again found good agreement with the hidden-state sequences determined for the separate models (Figure 3, bottom row, and Figure 4f). The lifetime histograms for both states look approximately exponential, justifying our assumption about the state dynamics (Figure 4g and h).

For the model trained on neuron one we find lifetimes τ1 = 0.63 s ± 0.37 s and τ2 = 1.71 s ± 0.45 s, and for the model trained on neuron two we find τ1 = 0.42 s ± 0.11 s and τ2 = 1.23 s ± 0.17 s.
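The windowed Jackknife error estimate can be sketched roughly as below. The per-window lifetime estimates are simulated stand-ins (the paper refits the model on truncated spike trains), and the delete-one formula is shown for a plain mean rather than a full model refit:

```python
import numpy as np

def jackknife_std(window_estimates):
    """Delete-one jackknife standard deviation over per-window estimates."""
    x = np.asarray(window_estimates, dtype=float)
    n = len(x)
    # Leave-one-out means ("pseudo-replicates").
    loo = (x.sum() - x) / (n - 1)
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

rng = np.random.default_rng(0)
est = 1.2 + 0.3 * rng.standard_normal(30)   # 30 windows, as in the text (fake values)
sd = jackknife_std(est)
assert sd > 0.0
# For a plain mean, the jackknife reduces to the usual standard error.
assert np.isclose(sd, est.std(ddof=1) / np.sqrt(len(est)))
```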
For the combined model, lifetimes are τ1 = 0.58 s ± 0.25 s and τ2 = 1.13 s ± 0.15 s. Thus, hidden-state switching seems to occur more frequently in the combined model. The reason for this increase might be that evidence for the song-like state appears more frequently with two neurons, as a single neuron might not be able to indicate song-like firing statistics with high temporal fidelity.

Figure 2: (a): The two state-conditional ISI histograms of an RA neuron during sleep are shown by the red and green curves, respectively. Gray patches represent Jackknife standard deviations. (b): After waking up the bird by pinching his tail, the new ISI histogram shown by the gray area becomes almost indistinguishable from the ISI histogram of state 1 (green line). (c): In comparison to the average ISI histogram of many RA neurons during singing (shown by the gray area, reproduced from [12]), the ISI histogram corresponding to state 2 (red line) is shifted to the right, but looks otherwise qualitatively similar.

We have also analysed the correlations between state dynamics in the different models. The hidden state function S(t) is a binary function that equals one when in hidden state 1 and zero when in state 2. For the case where we modelled the two spike trains separately, we have two such hidden state functions, S1(t) for neuron one and S2(t) for neuron two.
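The correlation measure used for comparing such binary state functions (cross-covariance normalized by the autocovariances, evaluated here at zero lag only) can be sketched on a common time grid; the sequences below are made up:

```python
import numpy as np

def state_correlation(s1, s2):
    """Zero-lag correlation between two binary state functions sampled on a
    common time grid: cross-covariance divided by the autocovariances."""
    s1 = np.asarray(s1, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    c = np.mean((s1 - s1.mean()) * (s2 - s2.mean()))
    return c / np.sqrt(s1.var() * s2.var())

# Toy state sequences: two mostly agreeing two-state labelings.
s1 = np.array([1, 1, 0, 0, 0, 1, 1, 1, 0, 0])
s2 = np.array([1, 1, 0, 0, 1, 1, 1, 1, 0, 0])
rho = state_correlation(s1, s2)
assert np.isclose(state_correlation(s1, s1), 1.0)   # perfect agreement
assert 0.0 < rho < 1.0                              # partial agreement
```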
We find that all correlation functions C_{S S1}(t), C_{S S2}(t), and C_{S1 S2}(t) have a peak at zero time lag, with a high peak correlation of about 0.7 (Figure 4c and f; the correlation function is defined as the cross-covariance function divided by the autocovariance functions).

We tested whether our model is a good generative model for the observed spike trains by applying the time-rescaling theorem, under which the ISIs of a good generative model with known CIFs should reduce to a Poisson process with unit rate, which, after another transformation, should lead to a uniform probability density on the interval (0, 1) [4]. Performing this test, we found that the transformed ISI densities of the combined model are uniform, thus validating our model (95% Kolmogorov-Smirnov test, Figure 4i).

5 Discussion

We have presented a mixed-state Markov model for point processes, assuming generation by random switching between renewal processes. Our algorithm is suited for systems in which neurons make discrete state transitions simultaneously. Previous attempts at fitting spike train data with Markov models exhibited weaknesses due to time binning. With large time bins and the number of spikes per bin treated as observables [13, 14], state transitions can only be detected when they are accompanied by firing rate changes. In our case, RA neurons have a roughly constant firing rate throughout the entire recording, and so such approaches fail.

We were able to model the hidden states in continuous time, but had to bin the ISIs in order to deal with limited data. In principle, the algorithm can operate on any binning scheme for the ISIs. Our choice of logarithmic bins keeps the number of parameters small (proportional to N_bins), but preserves a constant temporal resolution.

The hidden-state dynamics form Poisson processes characterized by a lifetime.
By estimating this lifetime, we hope it might be possible to form a link between the hidden states and the underlying physical process that governs the dynamics of switching. Despite the apparent limitation of Poisson statistics, it is a simple matter to generalize our model to hidden state distributions with long tails (e.g., power-law lifetime distributions): by cascading many hidden states into a chain (with fixed CIFs), a power-law distribution can be approximated by the combination of multiple exponentials with different lifetimes. Our code is available at http://www.ini.unizh.ch/~rich/software/.

Figure 3: Shown are the instantaneous firing rate (IFR) functions of two simultaneously recorded RA neurons (at any time, the IFR corresponds to the inverse of the current ISI). The green areas show the times when in the first (awake-like) hidden state, and the red areas when in the song-like hidden state. The top two rows show the result of computing two independent models on the two neurons, whereas the bottom rows show the result of a single model.

Figure 4: (a) and (b): State-conditional ISI pdfs for each of the two neurons. (d) and (e): ISI histograms (blue and yellow) for neurons 1 and 2, respectively, as well as state-conditional ISI histograms (red and green), computed as in Figure 2a. (g) and (h): State lifetime histograms for the song-like state (red) and for the awake-like state (green). Theoretical (exponential) histograms with escape rates r1 and r2 (fine black lines) show good agreement with the measured histograms, especially in F. (c): Correlation between state functions of the two separate models. (f): Correlation between the state functions of the combined model with separate model 1 (blue) and separate model 2 (yellow). (i): Kolmogorov-Smirnov plot after time rescaling. After transforming the ISIs, the resulting densities for both neurons remain within the 95% confidence bounds of the uniform density (gray area). In (a)–(c) and (f)–(h), Jackknife standard deviations are shown by the gray areas.

Acknowledgements

We would like to thank Sam Roweis for advice on Hidden Markov models and Maria Minkoff for help with the manuscript. R. H. is supported by the Swiss National Science Foundation. M. D. is supported by Stiftung der Deutschen Wirtschaft.

References

[1] Z. Nádasdy, H. Hirase, A. Czurkó, J. Csicsvári, and G. Buzsáki.
Replay and time compression of recurring spike sequences in the hippocampus. J Neurosci, 19(21):9497–9507, Nov 1999.

[2] K. G. Thompson, D. P. Hanes, N. P. Bichot, and J. D. Schall. Perceptual and motor processing stages identified in the activity of macaque frontal eye field neurons during visual search. J Neurophysiol, 76(6):4040–4055, Dec 1996.

[3] R. Cossart, D. Aronov, and R. Yuste. Attractor dynamics of network UP states in the neocortex. Nature, 423(6937):283–288, May 2003.

[4] E. N. Brown, R. Barbieri, V. Ventura, R. E. Kass, and L. M. Frank. The time-rescaling theorem and its application to neural spike train data analysis. Neur Comp, 14(2):325–346, Feb 2002.

[5] J. Deppisch, K. Pawelzik, and T. Geisel. Uncovering the synchronization dynamics from correlated neuronal activity quantifies assembly formation. Biol Cybern, 71(5):387–399, 1994.

[6] A. M. Fraser and A. Dimitriadis. Forecasting probability densities by using hidden Markov models with mixed states. In Weigend and Gershenfeld, editors, Time Series Prediction: Forecasting the Future and Understanding the Past, pages 265–282. Addison-Wesley, 1994.

[7] A. Berchtold. The double chain Markov model. Comm Stat Theor Meths, 28:2569–2589, 1999.

[8] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proc IEEE, 77(2):257–286, Feb 1989.

[9] R. H. R. Hahnloser, A. A. Kozhevnikov, and M. S. Fee. An ultra-sparse code underlies the generation of neural sequences in a songbird. Nature, 419(6902):65–70, Sep 2002.

[10] A. S. Dave and D. Margoliash. Song replay during sleep and computational rules for sensorimotor vocal learning. Science, 290(5492):812–816, Oct 2000.

[11] D. J. Thomson and A. D. Chave.
Jackknifed error estimates for spectra, coherences, and transfer functions. In Simon Haykin, editor, Advances in Spectrum Analysis and Array Processing, volume 1, chapter 2, pages 58–113. Prentice Hall, 1991.

[12] A. Leonardo and M. S. Fee. Ensemble coding of vocal control in birdsong. J Neurosci, 25(3):652–661, Jan 2005.

[13] G. Radons, J. D. Becker, B. Dülfer, and J. Krüger. Analysis, classification, and coding of multielectrode spike trains with hidden Markov models. Biol Cybern, 71(4):359–373, 1994.

[14] I. Gat, N. Tishby, and M. Abeles. Hidden Markov modelling of simultaneously recorded cells in the associative cortex of behaving monkeys. Network: Computation in Neural Systems, 8(3):297–322, 1997.
", "award": [], "sourceid": 2886, "authors": [{"given_name": "Marton", "family_name": "Danoczy", "institution": null}, {"given_name": "Richard", "family_name": "Hahnloser", "institution": null}]}