{"title": "Characterizing Neural Gain Control using Spike-triggered Covariance", "book": "Advances in Neural Information Processing Systems", "page_first": 269, "page_last": 276, "abstract": "Spike-triggered averaging techniques are effective for linear characterization of neural responses. But neurons exhibit important nonlinear behaviors, such as gain control, that are not captured by such analyses. We describe a spike-triggered covariance method for retrieving suppressive components of the gain control signal in a neuron. We demonstrate the method in simulation and on retinal ganglion cell data. Analysis of physiological data reveals significant suppressive axes and explains neural nonlinearities. This method should be applicable to other sensory areas and modalities.", "full_text": "Characterizing neural gain control using\n\nspike-triggered covariance\n\nOdelia Schwartz\n\nE. J. Chichilnisky\n\nCenter for Neural Science\n\nSystems Neurobiology\n\nNew York University\nodelia@cns.nyu.edu\n\nThe Salk Institute\n\nej@salk.edu\n\nEero P. Simoncelli\n\nHoward Hughes Medical Inst.\n\nCenter for Neural Science\n\nNew York University\n\neero.simoncelli@nyu.edu\n\nAbstract\n\nSpike-triggered averaging techniques are effective for linear characteri-\nzation of neural responses. But neurons exhibit important nonlinear be-\nhaviors, such as gain control, that are not captured by such analyses.\nWe describe a spike-triggered covariance method for retrieving suppres-\nsive components of the gain control signal in a neuron. We demonstrate\nthe method in simulation and on retinal ganglion cell data. Analysis\nof physiological data reveals signi\ufb01cant suppressive axes and explains\nneural nonlinearities. This method should be applicable to other sensory\nareas and modalities.\n\nWhite noise analysis has emerged as a powerful technique for characterizing response prop-\nerties of spiking neurons. 
A sequence of stimuli is drawn randomly from an ensemble and presented in rapid succession, and one examines the subset that elicits action potentials. This \u201cspike-triggered\u201d stimulus ensemble can provide information about the neuron\u2019s response characteristics. In the most widely used form of this analysis, one estimates an excitatory linear kernel by computing the spike-triggered average (STA); that is, the mean stimulus that elicited a spike [e.g., 1, 2]. Under the assumption that spikes are generated by a Poisson process with instantaneous rate determined by linear projection onto a kernel followed by a static nonlinearity, the STA provides an unbiased estimate of this kernel [3]. Recently, a number of authors have developed interesting extensions of white noise analysis. Some have examined spike-triggered averages in a reduced linear subspace of input stimuli [e.g., 4]. Others have recovered excitatory subspaces by computing the spike-triggered covariance (STC), followed by an eigenvector analysis to determine the subspace axes [e.g., 5, 6].\n\nSensory neurons exhibit striking nonlinear behaviors that are not explained by fundamentally linear mechanisms. For example, the response of a neuron typically saturates for large amplitude stimuli; the response to the optimal stimulus is often suppressed by the presence of a non-optimal mask [e.g., 7]; and the kernel recovered from STA analysis may change shape as a function of stimulus amplitude [e.g., 8, 9]. A variety of these nonlinear behaviors can be attributed to gain control [e.g., 8, 10, 11, 12, 13, 14], in which neural responses are suppressively modulated by a gain signal derived from the stimulus. 
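As an illustrative aside, the STA computation described above can be sketched in a few lines of NumPy. This is a minimal sketch, not the paper's code; the kernel shape, stimulus dimensions, and rate constant are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a known 20-sample linear kernel and Gaussian
# white noise stimuli (all names and sizes here are illustrative).
D, T = 20, 50000
k = np.zeros(D)
k[:5] = [0.2, 0.5, 1.0, 0.5, 0.2]
k /= np.linalg.norm(k)
stim = rng.standard_normal((T, D))

# Spikes: Poisson counts whose rate grows monotonically with the
# halfwave-rectified projection onto the kernel.
rate = np.maximum(stim @ k, 0.0) ** 2
spikes = rng.poisson(0.1 * rate)

# Spike-triggered average: the mean stimulus weighted by spike counts.
sta = spikes @ stim / spikes.sum()
sta /= np.linalg.norm(sta)
print(abs(sta @ k))  # close to 1: the STA recovers the kernel direction
```

Because the nonlinearity is monotonic and the stimulus ensemble spherically symmetric, the STA aligns with the true kernel, as in the argument of [3].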
Although the underlying mechanisms and time scales associated with such gain control are current topics of research, the basic functional properties appear to be ubiquitous, occurring throughout the nervous system.\n\nFigure 1: Geometric depiction of spike-triggered analyses. a, Spike-triggered averaging with two-dimensional stimuli. Black points indicate raw stimuli. White points indicate stimuli eliciting a spike, and the STA (black vector), which provides an estimate of k, corresponds to their center of mass. b, Spike-triggered covariance analysis of suppressive axes. Shown are a set of stimuli lying on a plane perpendicular to the excitatory kernel, k. Within the plane, stimuli eliciting a spike are concentrated in an elliptical region. The minor axis of the ellipse corresponds to a suppressive stimulus direction: stimuli with a significant component along this axis are less likely to elicit spikes. The stimulus component along the major axis of the ellipse has no influence on spiking.\n\nHere we develop a white noise methodology for characterizing a neuron with gain control. We show that a set of suppressive kernels may be recovered by finding the eigenvectors of the spike-triggered covariance matrix associated with smallest variance. We apply the technique to electrophysiological data obtained from ganglion cells in salamander and macaque retina, and recover a set of axes that are shown to reduce responses in the neuron. 
Moreover, when we fit a gain control model to the data using a maximum likelihood procedure within this subspace, the model accounts for changes in the STA as a function of contrast.\n\n1 Characterizing suppressive axes\n\nAs in all white noise approaches, we assume that stimuli correspond to vectors, s, in some finite-dimensional space (e.g., a neighborhood of pixels or an interval of time samples). We assume a gain control model in which the probability of a stimulus eliciting a spike grows monotonically with the halfwave-rectified projection onto an excitatory linear kernel, k, and is suppressively modulated by the fullwave-rectified projections onto a set of linear kernels, {k_i}.\n\nFirst, we recover the excitatory kernel, k. This is achieved by presenting spherically symmetric input stimuli (e.g., Gaussian white noise) to the neuron and computing the STA (Fig. 1a). The STA correctly recovers the excitatory kernel, under the assumption that each of the gain control kernels is orthogonal (or equal) to the excitatory kernel. The proof is essentially the same as that given for recovering the kernel of a linear model followed by a monotonic nonlinearity [3]. In particular, any stimulus can be decomposed into a component in the direction of the excitatory kernel, and a component in a perpendicular direction. This can be paired with another stimulus that is identical, except that its component in the perpendicular direction is negated. The two stimuli are equally likely to occur in a spherically Gaussian stimulus set (since they are equidistant from the origin), and they are equally likely to elicit a spike (since their excitatory components are equal, and their rectified perpendicular components are equal). Their vector average lies in the direction of the excitatory kernel. 
Thus, the STA (which is an average over all such stimuli, or all such stimulus pairs) must also lie in that direction. In a subsequent section we explain how to recover the excitatory kernel when it is not orthogonal to the suppressive kernels.\n\nFigure 2: Estimation of kernels from a simulated model (equation 2). Left: Model kernels (one excitatory, five suppressive, with weights 1, 1.5, 2, 2.5, and 3). Middle: Retrieved STA (excitatory kernel) and eigenvectors (suppressive kernels) associated with the lowest eigenvalues. Right: Sorted eigenvalues (variances) of the covariance matrix of stimuli eliciting spikes (STC); five eigenvalues fall significantly below the others.\n\nNext, we recover the suppressive subspace, assuming the excitatory kernel is known. Consider the stimuli lying on a plane perpendicular to this kernel. These stimuli all elicit the same response in the excitatory kernel, but they may produce different amounts of suppression. Figure 1b illustrates the behavior in a three-dimensional stimulus space, in which one axis is assumed to be suppressive. The distribution of raw stimuli on the plane is spherically symmetric about the origin. But the distribution of stimuli eliciting a spike is narrower along the suppressive direction: these stimuli have a component along the suppressive axis and are therefore less likely to elicit a spike. This behavior is easily generalized from this plane to the entire stimulus space. 
If we assume that the suppressive axes are fixed, then we expect to see reductions in variance in the same directions for any level of numerator excitation.\n\nGiven this behavior of the spike-triggered stimulus ensemble, we can recover the suppressive subspace using principal component analysis. We construct the sample covariance matrix of the stimuli eliciting a spike:\n\nC = (1/N) sum_n s_n s_n^T,    (1)\n\nwhere s_n is the nth stimulus eliciting a spike and N is the number of spikes. To ensure the estimated suppressive subspace is orthogonal to the estimated k (as in Figure 1b), the stimuli s_n are first projected onto the subspace perpendicular to the estimated k. The principal axes (eigenvectors) of C that are associated with small variance (eigenvalues) correspond to directions in which the response of the neuron is modulated suppressively.\n\nWe illustrate the technique on simulated data for a neuron with a spatio-temporal receptive field. The kernels are a set of orthogonal bandpass filters. The stimulus vectors s of this input sequence are defined over an 18-sample spatial region and an 18-sample time window (i.e., a 324-dimensional space). 
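The recovery procedure just described (equation (1) plus the perpendicular projection) can be sketched on a toy model. This is our own illustrative sketch: the divisive model, its single suppressive axis, and all sizes and constants are assumptions, not the paper's simulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy gain control model (illustrative): excitatory kernel k and one
# suppressive kernel ks, chosen orthogonal to each other.
D, T = 16, 200000
k, ks = np.eye(D)[0], np.eye(D)[1]
stim = rng.standard_normal((T, D))

exc = np.maximum(stim @ k, 0.0) ** 2
sup = (stim @ ks) ** 2
spiked = rng.random(T) < 0.5 * exc / (exc + 4.0 * sup + 1.0)
spk = stim[spiked]

# Estimate the excitatory kernel (STA), project the spike-triggered
# stimuli onto its perpendicular subspace, and form the covariance
# of equation (1).
sta = spk.mean(axis=0)
sta /= np.linalg.norm(sta)
proj = spk - np.outer(spk @ sta, sta)
C = proj.T @ proj / len(proj)

# eigh returns ascending eigenvalues; index 0 is the projected-out STA
# direction (variance ~0), so the smallest meaningful variance is next.
evals, evecs = np.linalg.eigh(C)
v_sup = evecs[:, 1]
print(abs(v_sup @ ks))  # close to 1: the low-variance axis is suppressive
```

The remaining eigenvalues stay near the raw stimulus variance, which is the gradually descending region discussed below.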
Spikes are generated using a Poisson process with mean rate determined by a specific form of gain control [14]:\n\nR(s) = |k . s|_+^2 / (sigma^2 + sum_i w_i (k_i . s)^2),    (2)\n\nwhere |.|_+ denotes halfwave rectification. The goal of the simulation is to recover the excitatory kernel k, the suppressive subspace spanned by the k_i, the weights w_i, and the constant sigma.\n\nFigure 3: Left: Retrieved kernels from STA and STC analysis of ganglion cell data from a salamander retina (cell 1999-11-12-B6A). Right: sorted eigenvalues (variances) of the spike-triggered covariance matrix (actual values and 95% confidence interval), with corresponding eigenvectors. Low eigenvalues correspond to suppressive directions, while other eigenvalues correspond to arbitrary (ignored) directions. The raw stimulus ensemble was sphered (whitened) prior to analysis, and low-variance axes under-represented in the stimulus set were discarded.\n\nFigure 2 shows the original and estimated kernels for a model simulation with 600K input samples and 36.98K spikes. First, we note that the STA recovers an accurate estimate of the excitatory kernel. Next, consider the sorted eigenvalues of C, as plotted in Figure 2. The majority of the eigenvalues descend gradually (the covariance matrix of the white noise source should have constant eigenvalues, but remember that those in Figure 2 are computed from a finite set of samples). The last five eigenvalues are significantly below the values one would obtain with randomly selected stimulus subsets. 
The eigenvectors associated with these lowest eigenvalues span approximately the same subspace as the suppressive kernels. Note that some eigenvectors correspond to mixtures of the original suppressive kernels, due to non-uniqueness of the eigenvector decomposition. In contrast, eigenvectors corresponding to eigenvalues in the gradually-descending region appear arbitrary in their structure. Finally, we can recover the scalar parameters of this specific model (the w_i and sigma) by selecting them to maximize the likelihood of the spike data according to equation (2). Note that a direct maximum likelihood solution on the raw data would have been impractical due to the high dimensionality of the stimulus space.\n\n2 Suppressive Axes in Retinal Ganglion Cells\n\nRetinal ganglion cells exhibit rapid [8, 15] as well as slow [9, 16, 17] gain control. We now demonstrate that we can recover a rapid gain control signal by applying the method to data from salamander retina [9]. The input sequence consists of 80K time samples of full-field 33Hz flickering binary white noise (contrast 8.5%). The stimulus vectors s of this sequence are defined over a 60-segment time window. Since stimuli are finite in number and binary, they are not spherically distributed. To correct for this, we discard low-variance axes and whiten the stimuli within the remaining axes.\n\nFigure 3 depicts the kernels estimated from the 623 stimulus vectors eliciting spikes. Similar to the model simulation, the eigenvalues gradually fall off, but four of the eigenvalues appear to drop significantly below the rest. To make this more concrete, we test the hypothesis that the majority of the eigenvalues are consistent with those of randomly selected stimulus vectors, but that the last four eigenvalues fall significantly below this range. 
Specifically, we perform a Monte Carlo simulation, drawing (with replacement) random subsets of 623 stimuli from the full set of raw stimuli. We also randomly select a set of (orthogonal) axes, representing a suppressive subspace, and project this subspace out of the set of randomly chosen stimuli. We then compute the eigenvalues of the sample covariance matrix of these stimuli. We repeat this 1000 times, and estimate a 95 percent confidence interval for each of the eigenvalues. The figure shows that the first eigenvalues lie within the confidence interval. In practice, we repeat this process in a nested fashion, assuming initially that no directions are significantly suppressive, then one direction, and so on up to four directions.\n\nFigure 4: Scatter plots from salamander ganglion cell data (cell 1999-11-12-B6A). Black points indicate the raw stimulus set. White points indicate stimuli eliciting a spike. a, Projection of stimuli onto estimated excitatory kernel vs. arbitrary kernel. b, Projection of stimuli onto an estimated suppressive kernel vs. arbitrary kernel.\n\nThese low eigenvalues correspond to eigenvectors that are concentrated in recent time (as is the estimated excitatory kernel). The remaining eigenvectors appear to be arbitrary, spanning the full temporal window. 
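The Monte Carlo significance test might be sketched as follows. This simplified sketch uses synthetic stand-in data of our own and omits the nested projection of candidate suppressive axes described above:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in data (illustrative): a raw Gaussian ensemble, and a
# "spike-triggered" subset whose variance is truly reduced along one axis.
D, T, N = 10, 20000, 600
raw = rng.standard_normal((T, D))
spk = rng.standard_normal((N, D))
spk[:, 0] *= 0.5  # one genuinely suppressive direction

def sorted_evals(x):
    # Sorted eigenvalues (variances) of the sample covariance matrix.
    return np.sort(np.linalg.eigvalsh(x.T @ x / len(x)))

obs = sorted_evals(spk)

# Null distribution: eigenvalues of random N-element subsets of the raw
# ensemble, drawn with replacement; keep the lower 2.5% bound per rank.
trials = np.array([sorted_evals(raw[rng.integers(0, T, N)])
                   for _ in range(500)])
lo = np.percentile(trials, 2.5, axis=0)

# The smallest observed eigenvalue falls below the null confidence bound,
# flagging a significant suppressive direction.
print(obs[0] < lo[0])  # True
```

In the full procedure, the null subsets additionally have a randomly chosen candidate subspace projected out before their eigenvalues are computed, matching the treatment of the spike-triggered ensemble.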
We emphasize that these kernels should not be interpreted as the receptive fields of individual neurons underlying the suppressive signal; they merely provide an orthogonal basis for a suppressive subspace.\n\nWe can now verify that the recovered STA axis is in fact excitatory, and that the kernels corresponding to the lowest eigenvalues are suppressive. Figure 4a shows a scatter plot of the stimuli projected onto the excitatory axis vs. an arbitrary axis. Spikes are seen to occur only when the component along the excitatory axis is high, as expected. Figure 4b is a scatter plot of the stimuli projected onto one of the suppressive axes vs. an arbitrary (ignored) axis. The spiking stimuli lie within an ellipse, with the minor axis corresponding to the suppressive kernel. This is exactly what we would expect in a suppressive gain control system (see Figure 1b).\n\nFigure 5 illustrates recovery of a two-dimensional suppressive subspace for a macaque retinal ganglion cell. The subspace was computed from the 36.43K stimulus vectors eliciting spikes out of a total of 284.74K vectors. The data are qualitatively similar to those of the salamander cell, although both the strength of suppression and the specific shapes of the scatter plots differ. In addition to suppression, the method recovers facilitation (i.e., high-variance axes) in some cells (not shown here).\n\n3 Correcting for Bias in Kernel Estimates\n\nThe kernels in the previous section were all recovered from stimuli of a single contrast. However, when the STA is computed in a ganglion cell for low and high contrast stimuli, the low-contrast kernel shows a slower time course [9] (Figure 7a). This would appear inconsistent with the method we describe, in which the STA is meant to provide an estimate of a single excitatory kernel. 
This behavior can be explained by assuming a model of the form given in equation 2, and in addition dropping the constraint that the gain control kernels are orthogonal (or identical) to the excitatory kernel.\n\nFigure 5: a, Sorted eigenvalues of stimuli eliciting spikes from a macaque retina (cell 2001-09-29-E6A), with 95% confidence interval. b-c, Scatter plots of stimuli projected onto recovered axes.\n\nFigure 6: Demonstration of estimator bias. When a gain control kernel is not orthogonal to the excitatory kernel, the responses to one side of the excitatory kernel are suppressed more than those on the other side. The resulting STA estimate is thus biased away from the true excitatory kernel, k.\n\nFirst we show that when the orthogonality constraint is dropped, the STA estimate of the excitatory kernel is biased by the gain control signal. Consider a situation in which a suppressive kernel contains a component in the direction of the excitatory kernel, k. 
We write k_1 = a k + b v, where v is a unit vector perpendicular to the excitatory kernel. Then, for example, a stimulus s whose component along v equals c > 0 produces a suppressive response proportional to |a (k . s) + b c|, but the corresponding paired stimulus (with component -c along v) produces a suppressive response proportional to |a (k . s) - b c|. Thus, the two stimuli are equally likely to occur but not equally likely to elicit a spike. As a result, the STA will be biased in the direction -v. Figure 6 illustrates an example in which a non-orthogonal suppressive axis biases the estimate of the STA.\n\nNow consider the model in equation 2 in the presence of a non-orthogonal suppressive subspace. Note that the bias is stronger for larger amplitude stimuli, because the constant term sigma^2 dominates the gain control signal for weak stimuli. Indeed, we have previously hypothesized that changes in receptive field tuning can arise from divisive gain control models that include an additive constant [14].\n\nEven when the STA estimate is biased by the gain control signal, we can still obtain an (asymptotically) unbiased estimate of the excitatory kernel. Specifically, the true excitatory kernel lies within the subspace spanned by the estimated (biased) excitatory and suppressive kernels. So, assuming a particular gain control model, we can again maximize the likelihood of the data, but now allowing both the excitatory and suppressive kernels to move within the subspace spanned by the initial estimated kernels. The resulting suppressive kernels need not be orthogonal to the excitatory kernel.\n\nFigure 7: STA kernels estimated from low (8.5%) and high (34%) contrast salamander retinal ganglion cell data (cell 1999-11-12-B6A). Kernels are normalized to unit energy. a, STA kernels derived from ganglion cell spikes. b, STA kernels derived from simulated spikes using the ML-estimated model. c, Kernels and corresponding weights (0.99, 0.97, 0.87, 0.52, 0.46) of the ML-estimated model.\n\nWe maximize the likelihood of the full two-contrast data set using a model that is a generalization of that given by equation (2):\n\nR(s) = |k . s|_+^gamma / (sigma^2 + sum_i w_i (k_i . s)^2)^(gamma/2),    (3)\n\nwhich reduces to equation (2) when gamma = 2. The exponent gamma is incorporated to allow for more realistic contrast-response functions. The excitatory axis is initially set to the STA, and the suppressive axes are set to the low-eigenvalue eigenvectors of the STC, along with the STA (e.g., to allow for self-suppression). The recovered axes and weights are shown in Figure 7c; the remaining model parameters are the exponent gamma and the constant sigma. 
Whereas the axes recovered from the STA/STC analysis are orthogonal, the axes determined during the maximum likelihood stage need not be (and in the data example are not) orthogonal. Figure 7b also demonstrates that the fitted model accounts for the change in STA observed at different contrast levels. Specifically, we simulate responses of the model (equation (3) with Poisson spike generation) on each of the two contrast stimulus sets, and then compute the STA based on these simulated spike trains. Although it is based on a single fixed excitatory kernel, the model exhibits a change in STA shape as a function of contrast very much like the salamander neuron.\n\n4 Discussion\n\nWe have described a spike-triggered covariance method for characterizing a neuron with gain control, and demonstrated the plausibility of the technique through simulation and analysis of neural data. The suppressive axes recovered from retinal ganglion cell data appear to be significant because: (1) as in the model simulation, a small number of eigenvalues are significantly below the rest; (2) the eigenvectors associated with these axes are concentrated in a temporal region immediately preceding the spike, unlike the remaining axes; (3) projection of the multi-dimensional stimulus vectors onto these axes reveals reductions of spike probability; (4) the full model, with parameters recovered through maximum likelihood, explains changes in STA as a function of contrast.\n\nModels of retinal processing often incorporate gain control [e.g., 8, 10, 15, 17, 18]. We have shown for the first time how one can use white noise analysis to recover a gain control subspace. 
The kernels defining this subspace correspond to relatively short timescales. Thus, it is interesting to compare the recovered subspace to models of rapid gain control. In particular, Victor [15] proposed a retinal gain model in which the gain signal consists of time-delayed copies of the excitatory kernel. In fact, for the cell shown in Figure 3, the recovered suppressive subspace lies within the space spanned by shifted copies of the excitatory kernel. The fact that we do not see evidence for slow gain control in the analysis might indicate that these signals do not lie within a low-dimensional stimulus subspace. In addition, the analysis is not capable of distinguishing between physiological mechanisms that could underlie gain control behaviors. Potential candidates may include internal biochemical adjustments, non-Poisson spike generation mechanisms, synaptic depression, and shunting inhibition due to other neurons.\n\nThis technique should be applicable to a far wider range of neural data than has been shown here. Future work will incorporate analysis of data gathered using stimuli that vary in both time and space (as in the simulated example of Figure 2). We are also exploring applicability of the technique to other visual areas.\n\nAcknowledgments We thank Liam Paninski and Jonathan Pillow for helpful discussions and comments, and Divya Chander for data collection.\n\nReferences\n[1] E deBoer and P Kuyper. Triggered correlation. IEEE Transact. Biomed. Eng., 15:169\u2013179, 1968.\n[2] J P Jones and L A Palmer. The two-dimensional spatial structure of simple receptive fields in the cat striate cortex. J Neurophysiology, 58:1187\u20131211, 1987.\n[3] E J Chichilnisky. 
A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12(2):199\u2013213, 2001.\n[4] D L Ringach, G Sapiro, and R Shapley. A subspace reverse-correlation technique for the study of visual neurons. Vision Research, 37:2455\u20132464, 1997.\n[5] R de Ruyter van Steveninck and W Bialek. Coding and information transfer in short spike sequences. Proc. R. Soc. Lond. B. Biol. Sci., 234:379\u2013414, 1988.\n[6] B A Y Arcas, A L Fairhall, and W Bialek. What can a single neuron compute? In Advances in Neural Information Processing Systems, volume 13, pages 75\u201381, 2000.\n[7] M Carandini, D J Heeger, and J A Movshon. Linearity and normalization in simple cells of the macaque primary visual cortex. Journal of Neuroscience, 17:8621\u20138644, 1997.\n[8] R M Shapley and J D Victor. The effect of contrast on the transfer properties of cat retinal ganglion cells. J. Physiol. (Lond), 285:275\u2013298, 1978.\n[9] D Chander and E J Chichilnisky. Adaptation to temporal contrast in primate and salamander retina. J Neurosci, 21(24):9904\u20139916, 2001.\n[10] R Shapley and C Enroth-Cugell. Visual adaptation and retinal gain control. Progress in Retinal Research, 3:263\u2013346, 1984.\n[11] R F Lyon. Automatic gain control in cochlear mechanics. In P Dallos et al., editor, The Mechanics and Biophysics of Hearing, pages 395\u2013420. Springer-Verlag, 1990.\n[12] W S Geisler and D G Albrecht. Cortical neurons: Isolation of contrast gain control. Vision Research, 8:1409\u20131410, 1992.\n[13] D J Heeger. Normalization of cell responses in cat striate cortex. Vis. Neuro., 9:181\u2013198, 1992.\n[14] O Schwartz and E P Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience, 4(8):819\u2013825, August 2001.\n[15] J D Victor. The dynamics of the cat retinal X cell centre. J. 
Physiol., 386:219\u2013246, 1987.\n[16] S M Smirnakis, M J Berry, D K Warland, W Bialek, and M Meister. Adaptation of retinal processing to image contrast and spatial scale. Nature, 386:69\u201373, March 1997.\n[17] K J Kim and F Rieke. Temporal contrast adaptation in the input and output signals of salamander retinal ganglion cells. J. Neurosci., 21(1):287\u2013299, 2001.\n[18] M Meister and M J Berry. The neural code of the retina. Neuron, 22:435\u2013450, 1999.\n", "award": [], "sourceid": 1975, "authors": [{"given_name": "Odelia", "family_name": "Schwartz", "institution": null}, {"given_name": "E.J.", "family_name": "Chichilnisky", "institution": null}, {"given_name": "Eero", "family_name": "Simoncelli", "institution": null}]}