{"title": "Approximate Bayesian Inference for a Mechanistic Model of Vesicle Release at a Ribbon Synapse", "book": "Advances in Neural Information Processing Systems", "page_first": 7070, "page_last": 7080, "abstract": "The inherent noise of neural systems makes it difficult to construct models which accurately capture experimental measurements of their activity. While much research has been done on how to efficiently model neural activity with descriptive models such as linear-nonlinear-models (LN), Bayesian inference for mechanistic models has received considerably less attention. One reason for this is that these models typically lead to intractable likelihoods and thus make parameter inference difficult. Here, we develop an approximate Bayesian inference scheme for a fully stochastic, biophysically inspired model of glutamate release at the ribbon synapse, a highly specialized synapse found in different sensory systems. The model translates known structural features of the ribbon synapse into a set of stochastically coupled equations. We approximate the posterior distributions by updating a parametric prior distribution via Bayesian updating rules and show that model parameters can be efficiently estimated for synthetic and experimental data from in vivo two-photon experiments in the zebrafish retina. Also, we find that the model captures complex properties of the synaptic release such as the temporal precision and outperforms a standard GLM. 
Our framework provides a viable path forward for linking mechanistic models of neural activity to measured data.", "full_text": "Approximate Bayesian Inference for a Mechanistic\n\nModel of Vesicle Release at a Ribbon Synapse\n\nCornelius Schr\u00f6der\u2217\n\nInstitute for Ophthalmic Research\n\nUniversity of T\u00fcbingen\n\ncornelius.schroeder@uni-tuebingen.de\n\nBen James\u2217\n\nSchool of Life Sciences\n\nUniversity of Sussex\nbmjame02@gmail.com\n\nLeon Lagnado\n\nSchool of Life Sciences\n\nUniversity of Sussex\n\nl.lagnado@sussex.ac.uk\n\nPhilipp Berens\n\nInstitute for Ophthalmic Research\n\nUniversity of T\u00fcbingen\n\nphilipp.berens@uni-tuebingen.de\n\nAbstract\n\nThe inherent noise of neural systems makes it dif\ufb01cult to construct models which\naccurately capture experimental measurements of their activity. While much\nresearch has been done on how to ef\ufb01ciently model neural activity with descriptive\nmodels such as linear-nonlinear-models (LN), Bayesian inference for mechanistic\nmodels has received considerably less attention. One reason for this is that these\nmodels typically lead to intractable likelihoods and thus make parameter inference\ndif\ufb01cult. Here, we develop an approximate Bayesian inference scheme for a\nfully stochastic, biophysically inspired model of glutamate release at the ribbon\nsynapse, a highly specialized synapse found in different sensory systems. The\nmodel translates known structural features of the ribbon synapse into a set of\nstochastically coupled equations. We approximate the posterior distributions by\nupdating a parametric prior distribution via Bayesian updating rules and show that\nmodel parameters can be ef\ufb01ciently estimated for synthetic and experimental data\nfrom in vivo two-photon experiments in the zebra\ufb01sh retina. Also, we \ufb01nd that the\nmodel captures complex properties of the synaptic release such as the temporal\nprecision and outperforms a standard GLM. 
Our framework provides a viable path\nforward for linking mechanistic models of neural activity to measured data.\n\n1\n\nIntroduction\n\nThe activity of sensory neurons is noisy; a central goal of systems neuroscience has therefore been\nto devise probabilistic models that allow modeling of the stimulus-response relationship of such neurons\nwhile capturing their variability [1]. Specifically, linear-nonlinear (LN) models and their generalizations\nhave been used extensively to describe neural activity in the retina [2, 3]. However, these types\nof models cannot yield insights into the mechanistic foundations of the neural computations they aim\nto describe, as they do not model their biophysical basis. On the other hand, mechanistic models on\nthe cellular or subcellular level have rarely been used to model stimulus-response relationships: they\nrequire highly specialized experiments to estimate individual parameters [4, 5], making it difficult to\nemploy them directly in a stimulus-response model; alternatively, they often result in an intractable\nlikelihood, making parameter inference challenging [6].\n\n\u2217Equal contribution. Code available at https://github.com/berenslab/abc-ribbon\n\n33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.\n\n\fFigure 1: Overview of the model. A. After a linear-non-linear processing stage, the signal is passed\nto a biophysically inspired model of a ribbon synapse in which vesicles are released in discrete\nevents. B. Sketch of a bipolar cell with attached photoreceptors (left) and a high resolution electron\nmicroscopy (EM) image of a ribbon synapse with its vesicle pools. 
The readily releasable pool is\nhighlighted in red, the reserve pool is shown in white (EM image adapted from [14]).\n\nHere we make use of recent advances in approximate Bayesian computation (ABC) [6, 7, 8, 9, 10, 11]\nto fit a fully stochastic, biophysically inspired model of vesicle release from the bipolar cell (BC)\naxon terminal to functional two-photon imaging data from the zebrafish retina (Fig. 1). It includes a\nlinear-nonlinear stage to model the stimulus dependency, and a set of stochastically coupled equations\nmodeling biophysical properties of the BC synapse. At this so-called \u201cribbon synapse\u201d, a specialized\nprotein complex, the \u201cribbon\u201d, acts as a conveyor belt that \u201ctethers\u201d and \u201cloads\u201d vesicles onto active\nzones for future release [12, 13]. It organizes vesicles into multiple pools: the \u201cdocked\u201d (or readily\nreleasable) pool consists of a number of vesicles located directly above the plasma membrane, while\nthe \u201cribboned\u201d pool consists of vesicles attached to the ribbon further from the cell membrane. The\ndocked vesicles are thus primed for immediate release and can be released simultaneously (so-called\nmultivesicular release, MVR). The ribboned vesicles are held in reserve to refill the docked pool as it\nis depleted by exocytosis [14, 15]. The transitions of vesicles between those pools can be modeled by\na set of coupled differential equations [16, 4], which we extend to a stochastic treatment. In addition\nto photoreceptors and bipolar cells in the retina [17], ribbon synapses are featured in many other\nsensory systems, such as in auditory hair cells and the vestibular system [18].\nThus, our proposed Bayesian framework links stimulus-response modeling to a biophysically inspired,\nmechanistic model of the ribbon synapse. 
This may contribute to a better understanding of sensory\ncomputations across levels of description, with applications in a diverse range of sensory systems.\n\n2 Previous work\n\nModels of neural activity Variants and extensions of LN models have been widely used to model\nthe activity of retinal neurons [2, 19, 1, 3]. In these descriptive models, the excitatory drive to\na cell is modeled as the convolution of a receptive field kernel with the stimulus, followed by a\nstatic nonlinearity. The result of this computation sets the rate of a stochastic spike generator, most\ncommonly using either a binomial or a Poisson distribution. These basic models have also been used to\napproximate BC activity [20]; however, they do not explicitly model the dynamics of vesicle release at\nthe ribbon synapse. Existing mechanistic models of synaptic release often require highly specialized\nexperiments to estimate parameters [21] or make only indirect inferences based on the spiking activity\nof post-synaptic cells [22, 23]. In addition, they have not been used to perform system identification.\nThe linear-nonlinear kinetics (LNK) model [24] attempts to address this issue. After an initial LN\nstage, the LNK model passes this information into a \u201ckinetics block\u201d consisting of a first-order set\nof kinetic equations implicitly representing the availability of vesicles. 
However, the LNK model\ntreats the states of the pools as a rescaled Markov process and cannot easily account for discrete vesicle\nrelease or MVR at the given noise level of single synapses.\n\nTable 1: Variables, parameters and distributions of the model.\n\nVariable | Description | Parameter | Movement distribution | Prior distribution\n- | time stretch of the kernel | \u03b3 | - | N(\u00b5, \u03c3\u00b2)\n- | non-linearity | k, h | - | N(\u00b5, \u03a3)\n- | correlation of exocytosed vesicles | \u03c1 | - | N(\u00b5, \u03c3\u00b2)\ndt | exocytosed vesicles | pdt | Beta-Bin | -\nD | vesicles at dock | - | - | -\nr | vesicles ribbon \u2192 dock | pr | res. Binomial | N(\u00b5, \u03c3\u00b2)\nR | vesicles on ribbon | - | - | -\nc | vesicles cytoplasm \u2192 ribbon | \u03bbc | res. Poisson | \u0393\n\nWe address these issues by proposing a model that combines LN modeling for system identification\nwith a probabilistic, biophysically inspired model for the ribbon synapse, with the capability to model\ndiscrete, multi-vesicular release. In contrast to classical LN models, the parameters of this model are\nreadily interpretable as they directly refer to biological processes.\n\nApproximate Bayesian Computation Many mechanistic models in computational neuroscience\nonly provide means to simulate data and do not yield an explicit likelihood function. Therefore,\ntheir parameters cannot be inferred easily. In such simulator-based models, Bayesian inference can\nbe performed through techniques known as Approximate Bayesian Computation or likelihood-free\ninference [8]. The general inference problem can be defined as follows: given some experimental\ndata x0 and a mechanistic, simulator-based model p(x|\u03b8) parametrized by \u03b8, we want to approximate\nthe posterior distribution p(\u03b8|x = x0). The simulator model allows us to generate samples xi given\nany parameter \u03b8, but the likelihood function cannot be evaluated. 
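As a concrete illustration of this setting, a minimal rejection-ABC loop for a toy simulator (a Poisson model, not the release model of this paper; the summary statistic, prior range, and acceptance tolerance are all illustrative assumptions) might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy simulator-based model p(x|theta): we can sample from it, but we
# pretend the likelihood cannot be evaluated.
def simulate(theta, n=100):
    return rng.poisson(theta, size=n)

# Summary statistic: map a simulated trace to a low-dimensional vector.
def summary(x):
    return np.array([x.mean(), x.std()])

# "Observed" data x0, generated with a ground-truth parameter.
theta_true = 4.0
s0 = summary(simulate(theta_true))

# Rejection ABC: draw theta from the prior, simulate, and keep the
# parameters whose summary statistics land close to those of the data.
prior_draws = rng.uniform(0.0, 10.0, size=5000)
accepted = [th for th in prior_draws
            if np.linalg.norm(summary(simulate(th)) - s0) < 0.5]

posterior_mean = np.mean(accepted)
```

The accepted draws approximate the posterior p(theta | x = x0); the methods discussed next replace this inefficient rejection step with guided, parametric approximations.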
Often, xi is first mapped to a\nlow dimensional space (so-called \u201csummary statistics\u201d), in which a loss function is computed. This\nmapping defines the features the model is trying to capture [10].\nThere are two main approaches to solve the inference problem: (1) approximate the likelihood\np(x0|\u03b8) and then sample (e.g. via MCMC) to get the posterior [8, 10]. In this approach, guided\nsampling is often used to generate new samples and either train a neural network or update other\nparametric models for the likelihood [8, 9]. One disadvantage of this approach is that a second\nsampling step is necessary to obtain the posterior, which can be as time consuming as the inference\nof the likelihood. (2) approximate the posterior p(\u03b8|x = x0). In principle, inference via rejection\nsampling could be applied, but is often inefficient. Thus, recently proposed methods use parametric\nmodels (like a mixture of Gaussians) to approximate the posterior over several sampling rounds [6].\nIn our work, we use an ABC method of type (2) with parametric prior distributions and Bayesian\nupdating rules to approximate the posterior distribution p(\u03b8|x = x0). We show that it efficiently\nlearns the parameters of the proposed release model.\n\n3 Linear Non-Linear Release Model\n\nOur model consists of two main parts (Fig. 1): a linear-nonlinear (LN) stage, which models the excitatory\ndrive to the BC, and a release (R) stage, which models the vesicle pools as dependent random variables (see\nAppendix A for pseudocode). 
Therefore, we refer to the model as the LNR-model.\n\n3.1 Linear-Nonlinear stage\n\nThe first stage of the LNR model is a standard LN model, in which a light stimulus l(t) is convolved\nwith a receptive field w\u03b3 to yield the surrogate calcium concentration ca(t) in the synaptic terminal,\nwhich is then passed through a static nonlinearity:\n\nca(t) = \u222b_{\u03c4=0}^{T} l(t \u2212 \u03c4) w\u03b3(\u03c4) d\u03c4 .\n\nWe assume w\u03b3 to be a biphasic kernel in order to model the signal processing performed in the\nphotoreceptor and the BC [16, 25] (Figure 1A, B). A single parameter \u03b3 was used to stretch/compress\nthe kernel on the time axis to estimate the receptive field (see Appendix C). An approach to allow for\nmore flexibility (e.g. using basis functions [2]) could in principle be used as well. However, this would\nlead to a higher dimensional parameter space, making inference less efficient. We used a sigmoidal\nnon-linearity to convert the calcium signal to the release probability:\n\npdt(t) = 1/(1 + exp(\u2212k(ca(t) \u2212 h))),\n\n(1)\n\nwhere the parameters for the slope k and the half activation h are inferred from the data. We add a\nsmall positive offset to the non-linearity and renormalize it to allow for spontaneous release.\n\n3.2 Release stage\n\nThe second stage of the LNR model consists of a model for the synaptic dynamics based on the\nstructure of the BC ribbon: we define variables representing the number of vesicles present in each\npool of the ribbon and the number of vesicles moving between pools per timestep (see Table 1). We\nuse capital letters to define the number of vesicles in a specific pool, and lowercase letters to indicate\nthe moving vesicles. At each time step, vesicles are first released from the dock, then new vesicles\nare moved from the ribbon to the dock, and finally the ribbon is refilled from the cytoplasm. 
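To make this update order concrete, here is a minimal simulation sketch of one such release stage in Python. The three sampling steps use the distributions defined in the remainder of this section; all parameter values are arbitrary illustrations, and the sketch assumes 0 < release probability < 1:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pool capacities, set as in Section 4 (Dmax ~ 7-8, Rmax ~ 50).
D_MAX, R_MAX = 8, 50

def release_probability(ca, k, h):
    # Static non-linearity of Eq. (1), mapping calcium to release probability.
    return 1.0 / (1.0 + np.exp(-k * (ca - h)))

def release_step(D, R, p_d, rho, p_r, lam):
    """One time step: (1) release from the dock, (2) refill the dock from
    the ribbon, (3) refill the ribbon from the cytoplasm."""
    # (1) Correlated multivesicular release via a beta-binomial: draw
    # p ~ Beta(alpha_t, beta_t) with mean p_d and correlation rho, then
    # d_t ~ Binomial(D, p). Assumes 0 < p_d < 1 and 0 < rho < 1.
    if p_d > 0:
        alpha = p_d * (1.0 / rho - 1.0)
        beta = alpha * (1.0 / p_d - 1.0)
        d_t = rng.binomial(D, rng.beta(alpha, beta))
    else:
        d_t = 0
    D = D - d_t
    # (2) Restricted binomial docking: clipping a Binomial(R, p_r) draw at
    # the free dock capacity realizes the boundary case exactly, since it
    # collects the probability mass of all overshooting draws.
    r_t = min(rng.binomial(R, p_r), D_MAX - D)
    D, R = D + r_t, R - r_t
    # (3) Restricted Poisson refill of the ribbon from the cytoplasm.
    c_t = min(rng.poisson(lam), R_MAX - R)
    R = R + c_t
    return D, R, d_t

# Simulate a few steps with illustrative parameter values.
D, R, released = D_MAX, R_MAX, []
for _ in range(200):
    p_d = release_probability(ca=0.5, k=10.0, h=0.4)
    D, R, d_t = release_step(D, R, p_d, rho=0.35, p_r=0.1, lam=2.0)
    released.append(d_t)
```

Note that clipping the unrestricted draws at the pool capacities is equivalent to sampling from the restricted distributions given below, because all overshooting outcomes are mapped onto the boundary value.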
For\nsimplicity, we assume that only the vesicle release probability is modulated by the excitatory drive;\nthe docking probabilities and rates of movement to the ribbon are constant over time.\n\nVesicle Release To model the correlated release of docked vesicles, we use a beta binomial distribution.\nThis is a binomial distribution for which the parameter p is itself a random variable, leading\nto correlated events [26]. The release probability pdt is assumed to be the output of the LN stage\naccording to equation 1. To achieve a correlation \u03c1 for the released vesicles and a release probability\nof pdt, the parameters for the beta binomial distribution are:\n\n\u03b1t = pdt \u00b7 (1/\u03c1 \u2212 1) and \u03b2t = \u03b1t \u00b7 (1/pdt \u2212 1), if pdt \u2260 0.\n\nThus, in each time step, we first draw the parameter \u02dcpt for the binomial distribution according to\na beta distribution: \u02dcpt \u223c Beta(\u03b1t, \u03b2t) and then sample the number of released vesicles dt from a\nbinomial distribution with parameters n = Dt\u22121 (the number of vesicles at the dock) and \u02dcpt:\n\np(dt|Dt\u22121) = 0 if pdt = 0, and p(dt|Dt\u22121) = C(Dt\u22121, dt) \u02dcpt^dt (1 \u2212 \u02dcpt)^(Dt\u22121\u2212dt) otherwise,\n\nwhere C(n, k) denotes the binomial coefficient.\n\nMovement to the dock We assume that rt vesicles located at the ribbon move to the dock in each\ntime step. 
Because there is a maximum number of vesicles Dmax that can be docked, such that\nrt + Dt\u22121 \u2264 Dmax, we use a restricted binomial distribution to model stochastic vesicle docking:\n\np(rt|Rt\u22121, Dt) = C(Rt\u22121, rt) pr^rt (1 \u2212 pr)^(Rt\u22121\u2212rt) if rt < Dmax \u2212 Dt,\np(rt|Rt\u22121, Dt) = \u2211_{r \u2265 Dmax\u2212Dt} C(Rt\u22121, r) pr^r (1 \u2212 pr)^(Rt\u22121\u2212r) if rt = Dmax \u2212 Dt,\np(rt|Rt\u22121, Dt) = 0 otherwise.\n\nThe first case is the standard binomial distribution with appropriate parameters, the second case\nmodels the assumption that moving more vesicles to the dock than its maximum capacity simply fills\nthe dock and assures that the probabilities over all possible events sum up to one.\n\nMovement to the ribbon We assume a large number of vesicles available in the cytoplasm (which\nis not explicitly modeled), such that the number of vesicles ct moving from the cytoplasm to the\nribbon follows a Poisson distribution, again respecting the maximal ribbon capacity Rmax:\n\np(ct|Rt) = e^\u2212\u03bb \u03bb^ct / ct! if ct < Rmax \u2212 Rt,\np(ct|Rt) = \u2211_{c \u2265 Rmax\u2212Rt} e^\u2212\u03bb \u03bb^c / c! if ct = Rmax \u2212 Rt,\np(ct|Rt) = 0 otherwise.\n\nFigure 2: Overview of the inference method. In each round samples are drawn from the (proposal)\nprior (blue), the model is evaluated and the response is mapped to its summary statistic. 
From this,\nthe loss per parameter \u03b8 is calculated, the best samples are accepted and used to update the (proposal)\nprior via Bayesian updating rules, yielding a posterior (red), which is the proposal prior for the next\nround.\n\n4 Bayesian Inference of Model Parameters\n\nIn the previous section, we constructed a fully stochastic model of vesicle release from BCs, including\nan explicit mechanistic model of the ribbon synapse, reflecting the underlying biological structures.\nThe maximal capacity of the dock Dmax was set based on the measured data to the largest quantal\nevent observed in the functional recording (Dmax \u2248 7 \u2212 8). Rmax, the maximal capacity of the\nribbon, was set to an estimate of the maximal number of vesicles at the ribbon in goldfish rod\nbipolar cells [27, 28], but decreased to reflect the smaller size of cone BCs in zebrafish larva [29]\n(Rmax \u2248 50).\nNext, we developed an ABC framework for likelihood-free inference to infer the remaining model\nparameters (Table 1) from functional two-photon recordings. Our method uses parametric prior\ndistributions which are updated in multiple rounds via Bayesian updating rules to provide a unimodal\napproximation to the posterior (Figure 2). Briefly, in each round we first draw a parameter vector\n\u03b8 from the (proposal) prior and evaluate several runs of the model \u02c6di for each sampled parameter\nvector. Due to the stochasticity of the model, each evaluation returns a different trace, for which a\nsummary statistic is calculated. This summary statistic reduces the dimensionality of the simulated\ntrace to a low dimensional vector. Based on this, the loss function L(\u03b8) is calculated by comparing it\nto the summary statistic of the observed data. The best parameters are used to calculate a posterior,\nwhich is then used as a proposal prior in the next round (Fig. 
2, pseudocode in Appendix E).\n\n4.1 Prior distributions and inference\n\nAs priors, we used normal distributions for all parameters except for \u03bbc (Table 1), where we used a\ngamma distribution (the conjugate prior to the Poisson distribution). Some parameters were bounded,\ne.g. to the interval [0, 1], and their distributions renormalized to effectively truncate the priors.\nIn each inference round, we used Bayesian updating rules to calculate the posterior distribution [30,\n31] based on the j best parameters {\u03b8}. For example, in round n + 1, we updated the hyperparameters\nfor the multivariate normal distribution of the NL parameters, k and h, as:\n\n\u00b5n+1 = (\u03ban / (\u03ban + j)) \u00b5n + (j / (\u03ban + j)) \u00af\u03b8,\n\n\u039bn+1 = \u039bn + S + (\u03ban j / (\u03ban + j)) (\u00af\u03b8 \u2212 \u00b5n)(\u00af\u03b8 \u2212 \u00b5n)^T,\n\nwhere \u00af\u03b8 is the mean over the best parameters and S = \u2211_{i=1}^{j} (\u03b8i \u2212 \u00af\u03b8)(\u03b8i \u2212 \u00af\u03b8)^T the (unnormalized)\ncovariance of these parameters. The mean is thus updated as a weighted average of the prior mean\nand the mean of the best parameters, with weights specified by \u03ba, which is updated as \u03ban+1 = \u03ban + j.\nThe posterior degrees of freedom \u03bdn+1, which is used to sample the covariance matrix \u03a3, is the prior\ndegrees of freedom plus the updating sample size: \u03bdn+1 = \u03bdn + j. With these updates we end up\nwith a two-step sampling procedure: first we draw the covariance \u03a3(n+1)i for each sample i of round\nn + 1 from the inverse-Wishart distribution Inv-Wishart(\u039bn+1\u207b\u00b9, \u03bdn+1), and then we draw the samples\nfrom the normal distribution N(\u00b5n+1, \u03a3(n+1)i).\n\nFigure 3: Results for synthetic data. A. Simulated traces for the synthetic data and simulations\nwith the recovered, fitted parameters in response to a binary light stimulus. B. 
The time course of\nthe mean and standard deviation of the different one dimensional marginal distributions over several\nrounds. Notice the asymmetric distribution for \u03bbc. See Appendix Fig. 8 for the two dimensional\nmarginals. C. Relative count for the different event types, error bars indicating \u00b1 std. D. Discrepancy\nof the data and fitted traces. The discrepancy is defined as the difference between the weighted\nsummary statistics of a single data trace and the remaining data (\u201cleave-one-out procedure\u201d) and\naccordingly the difference between the weighted summary statistics of a single fitted trace and the\ndata. Error bars indicate \u00b1 std. E. The kernel of the linear stage. F. The non-linearity. Although its\nparameter k is not matched perfectly in (B), there is almost no difference between the fitted and the\ntrue non-linearities.\n\nThe parameters for the univariate normal distributions as well as for the \u0393-distribution are similarly\nupdated in a Bayesian manner (see Appendix D). The number of drawn and accepted parameters\nwas constant (20 \u00b7 10\u00b3 and 10, respectively) except for the first round, where the number of drawn parameters was\ndoubled.\n\n4.2 Summary statistics and loss function\n\nAs a summary statistic, on which the discrepancy between different traces is defined, we used (1)\nthe histogram over the number of vesicles released in each event and (2) the Euclidean distance\nbetween the simulated and measured response trace, convolved with a Gaussian kernel (width:\n100 ms, inspired by [32]). The former proved especially useful in early rounds of inference. As\nexperiments typically consist of multiple repetitions of the same stimulus, we first calculated the\nsummary statistics s(di) for the individual traces di, normalized each entry by the summary statistic\nof the data traces and scaled it for its importance. This linear transformation is summarized in a\ndiagonal weight matrix W. 
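As an illustration, the summary statistic and its weighting might be computed as follows. This is a sketch only: the histogram bin range, the kernel width in samples, the toy traces, and the importance weights are all assumptions, not the values used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def summary_statistic(trace, event_counts, kernel_sd=100):
    """Map a release trace to a vector: a histogram over the number of
    vesicles per release event, plus the trace convolved with a Gaussian
    kernel (kernel_sd in samples; ~100 ms at a 1 kHz sampling rate)."""
    hist = np.bincount(event_counts, minlength=9)[:9]  # events of 0..8 vesicles
    t = np.arange(-3 * kernel_sd, 3 * kernel_sd + 1)
    kernel = np.exp(-0.5 * (t / kernel_sd) ** 2)
    kernel /= kernel.sum()
    return np.concatenate([hist, np.convolve(trace, kernel, mode="same")])

# Toy "data" and "simulated" traces of discrete release counts.
data_trace = rng.poisson(0.05, size=2000)
sim_trace = rng.poisson(0.05, size=2000)
s_data = summary_statistic(data_trace.astype(float), data_trace[data_trace > 0])
s_sim = summary_statistic(sim_trace.astype(float), sim_trace[sim_trace > 0])

# Normalize each entry by the data's summary statistic and scale it by an
# importance factor; together these form the diagonal weight matrix W.
importance = np.ones_like(s_data, dtype=float)
w = importance / (np.abs(s_data) + 1e-9)   # diagonal of W
weighted_distance = np.linalg.norm(w * s_data - w * s_sim)
```

Because W is diagonal, it is applied here as an elementwise product rather than as an explicit matrix.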
We used the average Euclidean distance of these weighted summary\nstatistics as the loss function L(\u03b8) (see also Appendix E and F). For n data traces di and a batch size\nof m simulations \u02c6dj per parameter \u03b8, this yields:\n\nL(\u03b8) = (1/nm) \u2211_{i,j} ||W s(di) \u2212 W s(\u02c6dj)||2 .\n\nThe (weighted) summary statistics can also be used to calculate the variability of the data and compare\nit to the summary statistics of the simulated data, giving us an estimate of the discrepancy between\nthe different traces (e.g. Fig. 3C, Fig. 5C).\n\nFigure 4: Two-photon imaging of in vivo zebrafish BCs allows for counting glutamatergic vesicles.\nA. Image of a zebrafish BC expressing the Superfolder-iGluSnFR transgene. Dashed circles\nindicate active zones where glutamate is released. B. Experimental glutamate release traces as \u2206F/F\nof one OFF BC in two trials and extracted events in response to a binary light stimulus. Notice the\nhigh inter-trial variability.\n\n4.3 Runtime and complexity\n\nThe runtime of the presented ABC method is dominated by the forward simulations of the model,\nwith a complexity O(n) if n is the number of drawn samples. This complexity is similar to SNPE-B\n[6], which in addition requires training of a mixture density network, while we resort to analytic\nupdating formulas. Although for expensive simulations either step is often only a small fraction\nof the total run time, our method should be advantageous if the simulation is fast and the posterior\nunimodal. This direct estimation of the posterior stands in contrast to SNL [9] or BOLFI [8], where the\ninference of the posterior involves a second sampling step via MCMC, which can be slow. 
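The analytic updating rules of Section 4.1 that keep this method cheap can be written compactly in code. The following sketch updates a 2-d normal-inverse-Wishart prior from the j best parameter vectors and then draws one proposal sample; the dimension, hyperparameter values, and the Wishart-based sampling construction (valid for integer degrees of freedom) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def niw_update(mu_n, kappa_n, Lambda_n, nu_n, theta_best):
    """Conjugate normal-inverse-Wishart update from the j best parameter
    vectors (rows of theta_best), following the formulas in Section 4.1."""
    j = theta_best.shape[0]
    theta_bar = theta_best.mean(axis=0)
    diff = theta_best - theta_bar
    S = diff.T @ diff  # unnormalized covariance of the best parameters
    mu_new = (kappa_n * mu_n + j * theta_bar) / (kappa_n + j)
    Lambda_new = (Lambda_n + S
                  + (kappa_n * j / (kappa_n + j))
                  * np.outer(theta_bar - mu_n, theta_bar - mu_n))
    return mu_new, kappa_n + j, Lambda_new, nu_n + j

# Example: update a 2-d prior (e.g. for the non-linearity parameters k, h).
mu, kappa, Lam, nu = np.zeros(2), 1.0, np.eye(2), 4
theta_best = rng.normal(loc=[10.0, 0.5], scale=0.1, size=(10, 2))
mu, kappa, Lam, nu = niw_update(mu, kappa, Lam, nu, theta_best)

# Two-step sampling for the next proposal prior: draw a covariance from
# the inverse-Wishart with scale Lam and nu degrees of freedom, then draw
# theta ~ N(mu, Sigma). For integer nu, an inverse-Wishart draw can be
# built from nu Gaussian samples: W ~ Wishart(Lam^-1, nu), Sigma = W^-1.
X = rng.multivariate_normal(np.zeros(2), np.linalg.inv(Lam), size=nu)
Sigma = np.linalg.inv(X.T @ X)
theta_sample = rng.multivariate_normal(mu, Sigma)
```

In practice a library routine (e.g. an inverse-Wishart sampler) would replace the manual Wishart construction; it is written out here only to keep the sketch self-contained.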
In addition,\nBOLFI [8] uses a Gaussian process with complexity O(n\u00b3) in the vanilla version to approximate the\nlikelihood, which can be prohibitively slow.\n\n5 Results\n\n5.1 Model inference on synthetic data\n\nNext, we tested whether we can successfully infer the parameters of the mechanistic model with the\nprocedure outlined above. For that, we chose a realistic parameter setting and used the model to\ngenerate data. As the sample size per cell is severely limited in experimental data, we generated only\nfour traces of 140 seconds each (Fig. 3A). The light stimulus was the same binary noise stimulus that\nwe used for the experimental recordings.\nThe inference procedure proved very efficient: most parameters converged quickly to their true\nvalues with reasonable uncertainty (Fig. 3B). Only the slope parameter k of the non-linearity is\nunderestimated, likely because of the \"non-linear\" effect of k on the slope of the non-linearity and\nthe smaller prior mean. The method sets k to a value where a further increase would not significantly\nchange the output of the model (see also Fig. 3F). After inference, it is difficult to differentiate\nbetween the true and the fitted traces, and the histogram over the number of vesicles released in each\ntime step can be fitted well (Fig. 3C). Indeed, simulations from the model were as similar to the\ndata as different data trials were to one another (Fig. 3D). Our procedure identified the time scale of the\nlinear receptive field as well as the non-linearity effectively (Fig. 3E and F). We also validated the efficacy\nof our method for other choices of parameters (not shown).\n\n5.2 Model inference on BC recordings from zebrafish retina\n\nWe acquired two-photon imaging measurements of the glutamate release from BC axon terminals\nin the zebrafish retina (n = 2 cells, see Fig. 4). 
Briefly, linescans (1 x 120 pixels) were recorded at\n1 kHz across the terminal of a BC expressing the glutamate reporter Superfolder-iGluSnFR, while\nshowing a 140 second light stimulus (discrete Gaussian or binary noise) with a frame rate of 10\nHz. For each recording, a region of interest (ROI) was defined and the time series extracted as the\nweighted average of the spatial profile. Baseline drift was corrected, the traces were converted to\n\u2206F/F and deconvolved with the kinetics of the reporter. Release events were identified\nas local maxima above a user-defined threshold in the imaging trace. The number of vesicles in each\nrelease event was estimated using a mixture of Gaussians model. For more details see [33].\nFigure 5A shows the LNR model fitted to four recordings from one OFF BC (total duration of the\nrecordings: 560 sec). We find that model parameters both for the LN stage as well as the release stage\nof the model can be inferred efficiently. Posteriors converged quickly (Fig. 5B and Appendix Fig. 9).\n\nFigure 5: Results for experimental data. A. Two experimental data traces and simulated traces with\ncorresponding fitted parameters as well as two predictions from the GLM in response to a binary noise\nstimulus. B. Some parameters are more restricted than others by the model. (See Appendix Section I\nfor all parameters, the two dimensional marginals, and the corresponding kernel and non-linearity.) C.\nRelative count for the different event types for the data and the models, inset on log-scale (mean \u00b1 std).\nD. Discrepancy as in Fig. 3C. E. Temporal jitter of different event types in response to a binary light\nstimulus (mean \u00b1 std, see Appendix Fig. 7 for the Gaussian noise stimulus). F. Cumulative release in\nresponse to a \u201ccalcium step\u201d (see Appendix Fig. 11 for a comparison to experimental data).\n\n
Interestingly, parameters such as the ribbon-to-dock transition rate pr, which model properties of the\nsystem that are not directly observable, also had larger uncertainty estimates. Similar to the synthetic\ndata, the histogram over the number of vesicles released in each event was matched well overall (Fig.\n5C). In contrast to the synthetic data, the discrepancy among data traces was a bit smaller than the\ndiscrepancy between the model fit and the data traces (Fig. 5D). This is likely due to the fact that\nsome events were missed and the data contained more large amplitude events than predicted by the\nmodel (Fig. 5A, C).\nWe finally tested whether the simple model captured two known properties of release events: the\ntemporal precision of events and the maximal release rates of the system. Interestingly, events with\nmany released vesicles were temporally more precise for both the data and the fitted model (Fig. 5E,\nF). As no summary statistic explicitly measured the temporal precision of different release event types\nat this resolution, this can be seen as evidence that our model captures crucial aspects of processing\nin BCs. Additionally, when comparing release rates with those recorded from electrically stimulated\ncells, we find the shape of cumulative vesicle release matches well with previously published results\n(Fig. 5F and Appendix J). This indicates that the model also extrapolates well to new stimuli.\n\n5.3 Comparison to a GLM\n\nWe compared the prediction performance of the LNR model to a generalized linear model (GLM)\n[2], a commonly used model in neural system identification. Besides the stimulus term, it includes a\nself-feedback term and assumes Poisson noise (for details see Appendix K). In contrast to the LNR\nmodel, the GLM was not able to capture the MVR that is apparent in the data: the GLM did not\npredict events with more than five vesicles at all, and even events with more than two vesicles\nwere rare (Fig. 
5A,C). This results in much larger discrepancies overall compared to the LNR model\n(Fig. 5D). The weights of the linear part for the release history partly captured the suppression of additional\nrelease after a release event, but could not model the full dynamics (Appendix Fig. 12C and\nFig. 5A). This shows that supplementing systems identification models with biophysical components\ncan not only lead to more interpretable models but also dramatically improve prediction accuracy.\n\n6 Discussion\n\nHere we developed a Bayesian inference framework for the LNR model, a probabilistic model of\nvesicle release from BCs in the retina, which combines a systems identification approach with a mechanistic,\nbiophysically inspired component. In contrast to purely statistical models, the parameters of\nthe LNR model are readily interpretable in terms of properties of the ribbon synapse. Specifically, we\nshow that its parameters can be fitted efficiently on synthetic data and two-photon imaging measurements.\nThe latter is remarkable, as mechanistic models often require highly specialized experiments\nto determine individual parameters. In this proof-of-principle study, we show that the parameters of\nthe LNR model can be simply inferred from the functional measurements, opening possibilities for\ninferring mechanistic models from large-scale imaging experiments, e.g. for comparison across cell\ntypes.\nWe found that the data overall was able to constrain the parameters very well, for both the LN stage\nand the release stage of the LNR model. Parameters that referred to parts of the model that were not\ndirectly observed in our measurements (such as the transition probability from the ribbon to the dock,\npr) were fitted with somewhat higher uncertainty, indicating that a larger range of parameter values\nwas compatible with the measurements. 
In addition, the LNR model captured MVR (the inferred correlation between vesicles is ρ ≈ 0.35), despite the inherent variability at the level of the single synapse. The LNR model also captured trends in temporal precision within MVR events, as well as release rates to non-physiological stimuli such as electrical stimulation, neither of which was used during inference.

Our proposed framework for Bayesian inference in the LNR model is comparable to recent likelihood-free inference methods (e.g. [6, 11]). In contrast to those, we do not use a mixture density network (MDN) to approximate the posterior distribution, but rather parametric distributions and analytic Bayesian updating rules. In practice, MDNs can lead to unstable behavior for very small or large weights and sometimes have non-optimal extrapolation properties (but see [34]). Due to its simplicity, our method is less susceptible to such problems, but provides only a unimodal approximation of the posterior p(θ|x = x0). For the LNR model, we rarely observe multimodality in the posterior, so our method yields a good and very fast approximation to the true posterior.

We combined a biophysically inspired mechanistic model with an efficient likelihood-free inference method. This eases the development of more accurate and interpretable models without requiring closed-form likelihoods. At the same time, we could show that our model is able to capture the variability inherent to the neural system we studied, and that taking biophysical constraints into account can dramatically improve prediction accuracy compared to standard systems identification models.
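The analytic Bayesian updating rules mentioned above can be illustrated with the textbook conjugate Gaussian case [30, 31]. This is a generic sketch with assumed prior values and a known-noise assumption, not the exact update applied to the LNR posterior:

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian prior over a scalar parameter theta; summaries assumed Gaussian
# with known noise variance -- a stand-in for the paper's updating rules.
mu0, tau0_sq = 0.0, 4.0        # prior mean and variance (assumed)
sigma_sq = 1.0                 # known noise variance of the summaries (assumed)

x = rng.normal(1.5, np.sqrt(sigma_sq), size=50)   # simulated summary statistics

# Conjugate update: the posterior is again Gaussian, in closed form.
n = len(x)
tau_n_sq = 1.0 / (1.0 / tau0_sq + n / sigma_sq)   # posterior variance shrinks
mu_n = tau_n_sq * (mu0 / tau0_sq + x.sum() / sigma_sq)
```

Because every update stays inside the same parametric family, the posterior approximation is obtained without training a density network, at the cost of being unimodal.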
Taken together, the presented methods will allow for further investigation of more complex systems, gaining mechanistic insight into how neurons cope with noise.

Acknowledgments

We thank Sofie-Helene Seibel for help with the experiments and genetics for this project and for the BC image in Fig. 4A, Christian Behrens for providing the PR/BC schema in Fig. 1B, and Jan Lause for his detailed feedback on the manuscript. The study was funded by the German Ministry of Education and Research (BMBF, 01GQ1601, 01IS18052C and 01IS18039A) and the German Research Foundation (BE5601/4-1, EXC 2064, project number 390727645). In addition, this project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 674901 and the Wellcome Trust (Investigator Award 102905/Z/13/Z).

References

[1] Johnatan Aljadeff, Benjamin J. Lansdell, Adrienne L. Fairhall, and David Kleinfeld. Analysis of Neuronal Spike Trains, Deconstructed. Neuron, 91(2):221–259, 2016.

[2] Jonathan W. Pillow, Jonathon Shlens, Liam Paninski, Alexander Sher, Alan M. Litke, E. J. Chichilnisky, and Eero P. Simoncelli. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207):995–999, 2008.

[3] Esteban Real, Hiroki Asari, Tim Gollisch, and Markus Meister. Neural Circuit Inference from Function to Structure. Current Biology, pages 1–10, 2017.

[4] Michael A Sikora, Jon Gottesman, and Robert F Miller. A computational model of the ribbon synapse. Journal of Neuroscience Methods, 145(1-2):47–61, 2005.

[5] Tianruo Guo, David Tsai, Siwei Bai, John W Morley, Gregg J Suaning, Nigel H Lovell, and Socrates Dokos. Understanding the retina: A review of computational models of the retina from the single cell to the network level.
Critical Reviews in Biomedical Engineering, 42(5), 2014.

[6] Jan-Matthis Lueckmann, Pedro J Goncalves, Giacomo Bassetto, Kaan Öcal, Marcel Nonnenmacher, and Jakob H Macke. Flexible statistical inference for mechanistic models of neural dynamics. In Advances in Neural Information Processing Systems, pages 1289–1299, 2017.

[7] Jarno Lintusaari, Michael U Gutmann, Ritabrata Dutta, Samuel Kaski, and Jukka Corander. Fundamentals and recent developments in approximate Bayesian computation. Systematic Biology, 66(1):e66–e82, 2017.

[8] Michael U Gutmann and Jukka Corander. Bayesian optimization for likelihood-free inference of simulator-based statistical models. The Journal of Machine Learning Research, 17(1):4256–4302, 2016.

[9] George Papamakarios, David C Sterratt, and Iain Murray. Sequential neural likelihood: Fast likelihood-free inference with autoregressive flows. arXiv preprint arXiv:1805.07226, 2018.

[10] Simon N Wood. Statistical inference for noisy nonlinear ecological dynamic systems. Nature, 466(7310):1102, 2010.

[11] George Papamakarios and Iain Murray. Fast ε-free inference of simulation models with Bayesian conditional density estimation. In Advances in Neural Information Processing Systems, pages 1028–1036, 2016.

[12] Peter Sterling and Gary Matthews. Structure and function of ribbon synapses. Trends in Neurosciences, 28(1):20–29, 2005.

[13] Leon Lagnado and Frank Schmitz. Ribbon synapses and visual processing in the retina. Annual Review of Vision Science, 1:235–262, 2015.

[14] Matthew Holt, Anne Cooke, Andreas Neef, and Leon Lagnado. High Mobility of Vesicles Supports Continuous Exocytosis at a Ribbon Synapse. Current Biology, 2004.

[15] Joshua H. Singer, Luisa Lassova, Noga Vardi, and Jeffrey S. Diamond. Coordinated multivesicular release at a mammalian ribbon synapse.
Nature Neuroscience, 2004.

[16] Tom Baden, Anton Nikolaev, Federico Esposti, Elena Dreosti, Benjamin Odermatt, and Leon Lagnado. A Synaptic Mechanism for Temporal Filtering of Visual Signals. PLoS Biology, 12(10), 2014.

[17] Tom Baden, Thomas Euler, Matti Weckström, and Leon Lagnado. Spikes and ribbon synapses in early vision. Trends in Neurosciences, 36(8):480–488, 2013.

[18] L. LoGiudice and G. Matthews. The Role of Ribbons at Sensory Synapses. The Neuroscientist, 15(4):380–391, 2009.

[19] Liam Paninski, Jonathan Pillow, and Jeremy Lewi. Statistical models for neural encoding, decoding, and optimal stimulus design. Progress in Brain Research, 165:493–507, 2007.

[20] Katrin Franke, Philipp Berens, Timm Schubert, Matthias Bethge, Thomas Euler, and Tom Baden. Inhibition decorrelates visual feature representations in the inner retina. Nature, 542(7642):439–444, 2017.

[21] Michael A. Sikora, Jon Gottesman, and Robert F. Miller. A computational model of the ribbon synapse. Journal of Neuroscience Methods, 145(1-2):47–61, 2005.

[22] M. Avissar, A. C. Furman, J. C. Saunders, and T. D. Parsons. Adaptation Reduces Spike-Count Reliability, But Not Spike-Timing Precision, of Auditory Nerve Responses. Journal of Neuroscience, 2007.

[23] A. J. Peterson, D. R. F. Irvine, and P. Heil. A Model of Synaptic Vesicle-Pool Depletion and Replenishment Can Account for the Interspike Interval Distributions and Nonrenewal Properties of Spontaneous Spike Trains of Auditory-Nerve Fibers. Journal of Neuroscience, 2014.

[24] Yusuf Ozuysal and Stephen A. Baccus. Linking the Computational Structure of Variance Adaptation to Biophysical Mechanisms. Neuron, 73(5):1002–1015, 2012.

[25] JL Schnapf, BJ Nunn, M Meister, and DA Baylor. Visual transduction in cones of the monkey Macaca fascicularis.
The Journal of Physiology, 427(1):681–713, 1990.

[26] Masato Hisakado, Kenji Kitsukawa, and Shintaro Mori. Correlated binomial models and correlation structures. Journal of Physics A: Mathematical and General, 39(50):15365–15378, 2006.

[27] Gary Matthews and Paul Fuchs. The diverse roles of ribbon synapses in sensory neurotransmission. Nature Reviews Neuroscience, 11(12):812, 2010.

[28] Henrique Von Gersdorff, Eilat Vardi, Gary Matthews, and Peter Sterling. Evidence that vesicles on the synaptic ribbon of retinal bipolar neurons can be rapidly released. Neuron, 1996.

[29] Henrique Von Gersdorff and Gary Matthews. Dynamics of synaptic vesicle fusion and membrane retrieval in synaptic terminals. Nature, 367(6465):735, 1994.

[30] Andrew Gelman, Hal S Stern, John B Carlin, David B Dunson, Aki Vehtari, and Donald B Rubin. Bayesian Data Analysis. Chapman and Hall/CRC, 2013.

[31] Kevin P Murphy. Conjugate Bayesian analysis of the Gaussian distribution. Technical note, 2007.

[32] S. Schreiber, J.M. Fellous, D. Whitmer, P. Tiesinga, and T.J. Sejnowski. A new correlation-based measure of spike timing reliability. Neurocomputing, 52-54:925–931, 2003.

[33] Ben James, Léa Darnet, José Moya-Díaz, Sofie-Helene Seibel, and Leon Lagnado. An amplitude code transmits information at a visual synapse. Nature Neuroscience, 2019.

[34] David Greenberg, Marcel Nonnenmacher, and Jakob Macke. Automatic posterior transformation for likelihood-free inference. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 2404–2414. PMLR, 2019.

[35] Jonathan S. Marvin, Benjamin Scholl, Daniel E. Wilson, Kaspar Podgorski, Abbas Kazemipour, Johannes Alexander Müller, Susanne Schoch, Francisco José Urra Quiroz, Nelson Rebola, Huan Bao, Justin P. Little, Ariana N. Tkachuk, Edward Cai, Adam W. Hantman, Samuel S.H.
Wang, Victor J. DePiero, Bart G. Borghuis, Edwin R. Chapman, Dirk Dietrich, David A. DiGregorio, David Fitzpatrick, and Loren L. Looger. Stability, affinity, and chromatic variants of the glutamate sensor iGluSnFR. Nature Methods, 2018.

[36] D. Zenisek, J. A. Steyer, and W. Almers. Transport, capture and exocytosis of single synaptic vesicles at active zones. Nature, 2000.