{"title": "An Estimation-Theoretic Framework for the Presentation of Multiple Stimuli", "book": "Advances in Neural Information Processing Systems", "page_first": 309, "page_last": 316, "abstract": null, "full_text": "An Estimation-Theoretic Framework for\n\nthe Presentation of Multiple Stimuli\n\nChristian W. Eurich(cid:3)\n\nInstitute for Theoretical Neurophysics\n\nUniversity of Bremen\n\nOtto-Hahn-Allee 1\n\nD-28359 Bremen, Germany\n\neurich@physik.uni-bremen.de\n\nAbstract\n\nA framework is introduced for assessing the encoding accuracy and\nthe discriminational ability of a population of neurons upon simul-\ntaneous presentation of multiple stimuli. Minimal square estima-\ntion errors are obtained from a Fisher information analysis in an\nabstract compound space comprising the features of all stimuli.\nEven for the simplest case of linear superposition of responses and\nGaussian tuning, the symmetries in the compound space are very\ndi(cid:11)erent from those in the case of a single stimulus. The analysis\nallows for a quantitative description of attentional e(cid:11)ects and can\nbe extended to include neural nonlinearities such as nonclassical\nreceptive (cid:12)elds.\n\n1\n\nIntroduction\n\nAn important issue in the Neurosciences is the investigation of the encoding proper-\nties of neural populations from their electrophysiological properties such as tuning\ncurves, background noise, and correlations in the (cid:12)ring. Many theoretical studies\nhave used estimation theory, in particular the measure of Fisher information, to ac-\ncount for the neural encoding accuracy with respect to the presentation of a single\nstimulus (e. g., [1, 2, 3, 4, 5]).\n\nMost modeling studies, however, neglect the fact that in a natural situation, neural\nactivity results from multiple objects or even complex sensory scenes. In particular,\nattention experiments require the presentation of at least one distractor along with\nthe attended stimulus. 
Electrophysiological data are now available demonstrating effects of selective attention on neural firing behavior in various cortical areas [6, 7, 8]. Such experiments require the development of theoretical tools which deviate from the usual practice of considering only single stimuli in the analysis. Zemel et al. [9] employ an extended encoding scheme for stimulus distributions and use Bayesian decoding to account for the presentation of multiple objects. Similarly, Bayesian estimation has been used in the context of attentional phenomena [10].

*homepage: http://www-neuro.physik.uni-bremen.de/~eurich

In this paper, a new estimation-theoretic framework for the simultaneous presentation of multiple stimuli is introduced. Fisher information is employed to compute lower bounds for the encoding error and the discriminational ability of neural populations independent of a particular estimator. Here we focus on the simultaneous presentation of two objects in the context of attentional phenomena. Furthermore, we assume a linearity in the neural response for reasons of analytical tractability; however, the method can be extended to include neural nonlinearities.

2 Estimation Theory for Multiple Stimuli

2.1 Tuning Curves in Compound Space

The tuning curve f(X) of a neuron is defined to be the average neural response to repetitive presentations of stimulus configurations X. In most cases, the response is taken to be the number n(X) of action potentials occurring within some time interval τ after stimulus presentation, or the neural firing rate r(X) = n(X)/τ:

f(X) = ⟨r(X)⟩ = ⟨n(X)⟩ / τ .    (1)

Within an estimation-theoretic framework, the variability of the neural response is described by a probability distribution conditioned on the value of X, P(n; X).
The average ⟨·⟩ in (1) can be regarded either as an average over multiple presentations of the same stimulus configuration (in an experimental setup), or as an average over n (in a theoretical description).

In most electrophysiological experiments, tuning curves are assessed through the presentation of a single stimulus, X = ~x, such as a bar or a grating characterized by a single orientation, or a dot of light at a specific position in the animal's visual field (e.g., [11, 12]). Such tuning curves will be denoted by f1(~x), where the subscript refers to the single object.

The behavior of a neuron upon presentation of multiple objects, however, cannot be inferred from tuning curves f1(~x). Instead, neurons may show nonlinearities such as the so-called non-classical receptive fields in visual area V1 which have attracted much attention in the recent past (e.g., [13, 14]). For M simultaneously presented stimuli, X = ~x1, ..., ~xM, the neuronal tuning curve can be written as a function fM(~x1, ..., ~xM), where the subscript M is not necessarily a parameter of the function but an indicator of the number of stimuli it refers to. The domain of this function will be called the compound space of the stimuli.

In the following, we consider a specific example consisting of two simultaneously presented stimuli, each characterized by a single physical property (such as orientation or direction of movement). The resulting tuning function is therefore a function of two scalar variables x1 and x2: f2(x1, x2) = ⟨r(x1, x2)⟩ = ⟨n(x1, x2)⟩/τ. Figure 1 visualizes the concept of the compound space.

In order to obtain analytical access to the encoding properties of a neural population, we will furthermore assume that a neuron's response f2(x1, x2) is a linear superposition of the single-stimulus responses f1(x1) and f1(x2), i.e.,

f2(x1, x2) = k f1(x1) + (1 − k) f1(x2) ,    (2)

where 0 < k < 1 is a factor which scales the relative importance of the two stimuli. Such linear behavior has been observed in area 17 of the cat upon presentation of bi-vectorial transparent motion stimuli [15] and in areas MT and MST of the macaque monkey upon simultaneous presentation of two moving objects [16]. In general, however, the compound space method is not restricted to linear neural responses.

Figure 1: The concept of compound space. A single-stimulus tuning curve f1(x) (left) yields the average response to the presentation of either x' or x''; the simultaneous presentation of x' and x'', however, can be formalized only through a tuning curve f2(x1, x2) (right).

The consideration of a neural population in the compound space yields tuning properties and symmetries which are very different from those in a D-dimensional single-stimulus space considered in the literature (e.g., [2, 3, 4]). First, the tuning curves have a different appearance. Figure 2a shows a tuning curve f2(x1, x2) given by (2), where f1(x) is a Gaussian,

f1(x) = F exp{−(x − c)² / (2σ²)} ,    (3)

and F is a gain factor which can be scaled to be the maximal firing rate of the neuron. f2(x1, x2) is not radially symmetric but has cross-shaped level curves. Second, a single-stimulus tuning curve f1(x) whose center is located at x = c yields a linear superposition whose center is given by the vector (c, c) in the compound space. This is due to the fact that both axes describe the same physical stimulus feature. Therefore, all tuning curve centers are restricted to the 1-dimensional subspace x1 = x2. The tuning curve centers are assumed to have a distribution in the compound space which can be written as

η̃(c1, c2) = { 0     if c1 ≠ c2 ,
            { η(c)  if c1 = c2 .    (4)

Figure 2: (a) A tuning curve f2(x1, x2) in a 2-dimensional compound space given by (2) and (3) with k = 0.5, c = 5, σ = 0.3, F = 1. (b) Arrangement of tuning curves: The centers of the tuning curves are restricted to the diagonal x1 = x2. The cross is a schematic cross-section of the tuning curve in (a).

The geometrical features in the compound space suggest that an estimation-theoretic approach will yield encoding properties of neural populations which are different from those obtained from the presentation of a single stimulus.

2.2 Fisher Information

In order to assess the encoding accuracy of a neural population, the stochasticity of the neural response is taken into account. For N neurons, it is formalized as the probability of obtaining n^(i) spikes in the i-th neuron (i = 1, ..., N) as a response to the stimulus configuration X, P(n^(1), n^(2), ..., n^(N); X) ≡ P(~n; X). Here we assume independent spike generation mechanisms in the neurons:

P(n^(1), n^(2), ..., n^(N); X) = ∏_{i=1}^{N} P(n^(i); X) .    (5)

These parameter-dependent distributions are obtained either experimentally or through a noise model; a convenient choice for the latter is a Poisson distribution with a spike count average given by the tuning curve (1) of each neuron. In the 2-dimensional compound space discussed in the previous section, P(~n; X) ≡ P(~n; x1, x2).
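The encoding model of Eqs. (2), (3) and the Poisson noise model can be sketched numerically. The following is a minimal illustration, not part of the original analysis; the function names f1, f2, and spike_counts are chosen here for convenience, and the default parameters follow Fig. 2a:

```python
import numpy as np

def f1(x, c, sigma=0.3, F=1.0):
    """Single-stimulus Gaussian tuning curve, Eq. (3)."""
    return F * np.exp(-(x - c) ** 2 / (2.0 * sigma ** 2))

def f2(x1, x2, c, k=0.5, sigma=0.3, F=1.0):
    """Compound-space tuning curve: linear superposition of Eq. (2)."""
    return k * f1(x1, c, sigma, F) + (1.0 - k) * f1(x2, c, sigma, F)

def spike_counts(x1, x2, c, tau=1.0, trials=1000, seed=0, **kw):
    """Poisson spike counts with mean tau * f2(x1, x2), the noise model of Eq. (7)."""
    rng = np.random.default_rng(seed)
    return rng.poisson(tau * f2(x1, x2, c, **kw), size=trials)
```

Presenting both stimuli at the tuning-curve center (x1 = x2 = c) recovers the full gain F, while moving only one stimulus away from c leaves the weighted contribution of the other, which produces the cross-shaped level curves of Fig. 2a.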
The Fisher information is a 2 × 2 matrix J(x1, x2) = (J_ij(x1, x2)) (i, j ∈ {1, 2}), whose entries are given by

J_ij(x1, x2) = ⟨ (∂/∂x_i ln P(~n; x1, x2)) (∂/∂x_j ln P(~n; x1, x2)) ⟩  (i, j ∈ {1, 2}) .    (6)

The Cramér-Rao inequality states that a lower bound on the expected square estimation error of the i-th feature, ε²_{i,min} (i = 1, 2), is given by (J⁻¹)_ii provided that the estimator is unbiased. In the following, this lower bound is studied in the 2-dimensional compound space.

3 Results

Single-neuron Fisher Information. The single-neuron Fisher information in the compound space can be written down for an arbitrary noise model. Here we choose a Poissonian spike distribution,

P(n; x1, x2) = (τ f2(x1, x2))^n exp{−τ f2(x1, x2)} / n! ,    (7)

whereby the tuning is assumed to be linear according to (2), and the single-stimulus tuning curve f1(x) is a Gaussian given by (3). A straightforward calculation yields the single-neuron Fisher information matrix J^c(x1, x2) = (J^c_ij(x1, x2)) (i, j ∈ {1, 2}) given by

J^c(x1, x2) = τF / ( σ⁴ [ k e^{−(x1−c)²/(2σ²)} + (1 − k) e^{−(x2−c)²/(2σ²)} ] ) ×

    ( k²(x1−c)² e^{−(x1−c)²/σ²}                           k(1−k)(x1−c)(x2−c) e^{−[(x1−c)²+(x2−c)²]/(2σ²)} )
    ( k(1−k)(x1−c)(x2−c) e^{−[(x1−c)²+(x2−c)²]/(2σ²)}     (1−k)²(x2−c)² e^{−(x2−c)²/σ²} )    (8)

where the index c refers to the center (c, c) of the tuning curve.

Population Fisher Information. For independently spiking neurons (5), the population Fisher information is the sum of the single-neuron Fisher information values.
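For the Poisson model (7), the matrix (8) reduces to J^c_ij = τ (∂f2/∂x_i)(∂f2/∂x_j) / f2, which makes a direct numerical check straightforward. The sketch below is illustrative only and assumes nothing beyond Eqs. (2), (3), and (7); it builds J^c from the gradient of f2:

```python
import numpy as np

def fisher_single(x1, x2, c, k=0.5, sigma=0.3, F=1.0, tau=1.0):
    """Single-neuron Fisher matrix J^c(x1, x2) of Eq. (8).
    For Poisson counts with mean tau * f2: J_ij = tau * (d_i f2)(d_j f2) / f2."""
    g1 = F * np.exp(-(x1 - c) ** 2 / (2 * sigma ** 2))
    g2 = F * np.exp(-(x2 - c) ** 2 / (2 * sigma ** 2))
    f = k * g1 + (1 - k) * g2                      # Eq. (2)
    grad = np.array([-k * (x1 - c) / sigma ** 2 * g1,        # df2/dx1
                     -(1 - k) * (x2 - c) / sigma ** 2 * g2]) # df2/dx2
    return tau * np.outer(grad, grad) / f
```

Note that a single neuron yields a rank-one matrix (its determinant vanishes), so J^c is not invertible on its own; only the sum over a population of centers c produces finite Cramér-Rao bounds.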
Assuming some density η(c) of tuning curve centers on the diagonal x1 = x2, the population Fisher information is therefore obtained by an integration of (8). Here we consider the simple case of a constant density, η(c) ≡ η0, resulting in elements J_ij(x1, x2) (i, j ∈ {1, 2}) of the Fisher information matrix given by

J_ij(x1, x2) = η0 ∫_{−∞}^{∞} J^c_ij(x1, x2) dc .    (9)

A symmetry with respect to the diagonal x1 = x2 allows the replacement of the two variables x1, x2 by a single variable ρ visualized in Fig. 3.

Figure 3: Transformation to the variable ρ, which is proportional to the distance of the point (x1, x2) to the diagonal. ρ therefore quantifies the similarity of the stimuli x1 and x2.

It is straightforward to obtain two additional symmetries, J12(ρ) = J21(ρ) and J11(ρ) = J11(−ρ).
The final population Fisher information is given by

J(ρ) = ( J11(ρ)    J12(ρ)                )
       ( J12(ρ)    [(1−k)²/k²] J11(ρ) ) ,    (10)

whereby

J11(ρ) = (k²τFη0/σ) ∫_{−∞}^{∞} (ξ + ρ/σ)² exp{−(ξ + ρ/σ)²} / [ k exp{−½(ξ + ρ/σ)²} + (1 − k) exp{−½(ξ − ρ/σ)²} ] dξ ,

J12(ρ) = (k(1 − k)τFη0/σ) ∫_{−∞}^{∞} (ξ + ρ/σ)(ξ − ρ/σ) exp{−½[(ξ + ρ/σ)² + (ξ − ρ/σ)²]} / [ k exp{−½(ξ + ρ/σ)²} + (1 − k) exp{−½(ξ − ρ/σ)²} ] dξ .

In the following, three examples will be discussed.

3.1 Example 1: Symmetrical Tuning

First we study the symmetrical case k = 1/2, the receptive fields of which are given in Fig. 2a. Fig. 4 shows the minimal square estimation error for x1, ε²_{1,min}(ρ), as obtained from the first diagonal element of the inverse Fisher information matrix. Due to the symmetry, it is identical to the minimal square error for x2, ε²_{2,min}(ρ). The estimation error diverges as ρ → 0. This can be understood as follows: For k = 1/2, the matrix (10) is symmetric and can be diagonalized. The eigenvector directions are

~v1 = (1/√2) (1, 1)ᵀ ,   ~v2 = (1/√2) (−1, 1)ᵀ .    (11)

Correspondingly, the diagonal Fisher information matrix yields a lower bound for the estimation errors of (x1 + x2)/√2 and (x2 − x1)/√2, respectively. The results are shown in Fig. 5. The estimation error for (x1 + x2)/√2 takes a finite value for all ρ. However, the estimation error for (x2 − x1)/√2 diverges as ρ → 0. This error corresponds to an estimation of the difference of the two presented stimuli. As expected, a discrimination becomes impossible as the stimuli merge. The Fisher information for (x2 − x1)/√2 can be regarded as a discrimination measure which takes the simultaneous presentation of stimuli into account.

Figure 4: Minimal square estimation error for stimulus x1 or x2. Solid line: F = 1; dotted line: F = 1.5. In both cases, k = 0.5, σ = 1, τ = 1, η0 = 1.

Figure 5: Minimal square estimation error for (a) (x1 + x2)/√2 and (b) (x2 − x1)/√2. Solid lines: F = 1; dotted lines: F = 1.5. Same parameters as in Fig. 4.

3.2 Example 2: Attention on Both Stimuli

Electrophysiological studies in V1 and V4 [7] and MT [8] of macaque monkeys suggest that the gain but not the width of tuning curves is increased as stimuli in a cell's receptive field are attended. This can easily be incorporated in the current model: The gain corresponds to the factor F in the tuning curve (3). Figures 4 and 5 compare the results obtained in the previous section (F = 1) with a maximal firing rate F = 1.5. As expected, the minimal square errors are smaller for higher F in all cases (dotted lines); a higher firing rate yields a better stimulus estimation. This suggests that attention increases the localization accuracy of x1 and x2 as well as their discrimination if both stimuli are attended.
The former is consistent with psychophysical results on attentional enhancement of spatial resolution in human subjects [17].

3.3 Example 3: Attending One Stimulus

The situation changes if only one of the two stimuli is attended. Electrophysiological recordings in monkey area V4 suggest that upon presentation of two stimuli inside a neuron's receptive field, the influence of the attended stimulus increases as compared to the unattended one [6]. In our framework, this situation can be considered by increasing the weight factor of the attended stimulus in the linear superposition (2). Here we study the case k = 0.75 corresponding to attending stimulus x1. The resulting tuning curve shows characteristic distortions as compared to the symmetrical case k = 0.5 (Fig. 6a). The Fisher information analysis reveals that the attended stimulus x1 yields a smaller minimal square estimation error than it does in the non-attention case k = 0.5, whereas the minimal square error for the unattended stimulus x2 is increased (data not shown). Figure 6b shows the minimal square error for the difference of the stimuli, (x2 − x1)/√2. The minimal estimation error becomes larger as compared to k = 0.5.

Figure 6: Neural encoding for one attended stimulus. (a) Tuning curve (2), (3) for k = 0.75, i.e., stimulus x1 is attended. All other parameters as in Fig. 2a. (b) Minimal square estimation errors for the direction (x2 − x1)/√2 resulting from a rotated Fisher information matrix. Solid line: k = 0.5 as in Fig. 5b; dotted line: k = 0.75. F = 1, all other parameters as in Fig. 4.
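The population-level bounds behind Figs. 4-6 can be reproduced by summing the single-neuron matrices over a dense grid of centers, a numerical stand-in for the integral (9). This is a sketch under the stated assumptions (constant density η0, Gaussian tuning, Poisson noise); the stimulus pair is placed at x1 = ρ, x2 = −ρ, so that ρ measures the distance to the diagonal as in Fig. 3:

```python
import numpy as np

def fisher_population(rho, k=0.5, sigma=1.0, F=1.0, tau=1.0, eta0=1.0):
    """Population Fisher matrix J(rho): Eq. (9) with constant center density
    eta0, evaluated on a grid of centers c (stimuli at x1 = rho, x2 = -rho)."""
    x1, x2 = rho, -rho
    cs, dc = np.linspace(-12.0 * sigma, 12.0 * sigma, 4001, retstep=True)
    g1 = F * np.exp(-(x1 - cs) ** 2 / (2 * sigma ** 2))
    g2 = F * np.exp(-(x2 - cs) ** 2 / (2 * sigma ** 2))
    f = k * g1 + (1 - k) * g2
    d1 = -k * (x1 - cs) / sigma ** 2 * g1         # df2/dx1 for each center
    d2 = -(1 - k) * (x2 - cs) / sigma ** 2 * g2   # df2/dx2 for each center
    J11 = eta0 * dc * np.sum(tau * d1 * d1 / f)
    J12 = eta0 * dc * np.sum(tau * d1 * d2 / f)
    J22 = eta0 * dc * np.sum(tau * d2 * d2 / f)
    return np.array([[J11, J12], [J12, J22]])

def min_sq_errors(rho, **kw):
    """Cramer-Rao bounds eps^2_{i,min} = (J^{-1})_{ii}, cf. Figs. 4-6."""
    return np.diag(np.linalg.inv(fisher_population(rho, **kw)))
```

The qualitative results of all three examples follow from this sketch: the bound for x1 diverges as ρ → 0 (Example 1), a larger gain F lowers all bounds (Example 2), and raising k to 0.75 lowers the bound for the attended stimulus x1 at the expense of the unattended x2 (Example 3).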
This result can be interpreted as follows: Attending stimulus x1 yields a better encoding of x1 but a worse encoding of x2. The latter results in the larger estimation error for the difference (x2 − x1)/√2 of the stimulus values. This can be interpreted as a worse discriminational ability: In a psychophysical experiment, subjects attending stimulus x1 will have only a crude representation of the unattended stimulus x2 and will therefore yield a performance which is worse as compared to the situation where both stimuli are processed in the same way. This is a prediction resulting from the presented framework.

4 Summary and Discussion

A method was introduced to account for the encoding of multiple stimuli by populations of neurons. Estimation theory was performed in a compound space whose axes are defined by the features of each stimulus. Here we studied a specific example of linear neurons with Gaussian tuning and Poissonian spike statistics to gain insight into the symmetries in the compound space and the interpretation of the resulting estimation errors. The approach allows for a detailed consideration of attention effects on the neural level [7, 8, 6]. The method can be extended to include nonlinear neural behavior as multiple stimuli are presented; see e.g. [13, 14], where the response of single neurons to two orientation stimuli cannot be easily inferred from the neural behavior in the case of only one stimulus. More experimental and theoretical work has to be done in order to account for the psychophysical performance under the influence of attention as it has been measured, for example, in [17]. For this purpose, the presented approach has to be related to classical measures in
discrimination and same-different tasks. From theoretical considerations in the case of a single stimulus [2, 3, 4, 5] it is well known that the encoding accuracy of a neural population may depend on various properties such as the number of encoded features, the noise model, and the correlations in the neural activity. The influence of such factors within the presented framework is currently under investigation.

Acknowledgments

I wish to thank Shun-ichi Amari, Hiroyuki Nakahara, Anthony Marley and Stefan Wilke for stimulating discussions. Part of this paper was written during my stay at the RIKEN institute. I also acknowledge support from SFB 517, Neurocognition.

References

[1] M. A. Paradiso, A theory for the use of visual orientation information which exploits the columnar structure of striate cortex, Biol. Cybern. 58 (1988) 35-49.

[2] K. Zhang and T. J. Sejnowski, Neuronal tuning: to sharpen or broaden? Neural Comp. 11 (1999) 75-84.

[3] C. W. Eurich and S. D. Wilke, Multidimensional encoding strategy of spiking neurons, Neural Comp. 12 (2000) 1519-1529.

[4] S. D. Wilke and C. W. Eurich, Representational accuracy of stochastic neural populations, Neural Comp. 14 (2001) 155-189.

[5] H. Nakahara, S. Wu and S.-i. Amari, Attention modulation of neural tuning through peak and base rate, Neural Comp. 13 (2001) 2031-2047.

[6] J. Moran and R. Desimone, Selective attention gates visual processing in the extrastriate cortex, Science 229 (1985) 782-784.

[7] C. J. McAdams and J. H. R. Maunsell, Effects of attention on orientation-tuning functions of single neurons in macaque cortical area V4, J. Neurosci. 19 (1999) 431-441.

[8] S. Treue and J. C. Martínez Trujillo, Feature-based attention influences motion processing gain in macaque visual cortex, Nature 399 (1999) 575-579.

[9] R. S. Zemel, P. Dayan and A. Pouget, Probabilistic interpretation of population codes, Neural Comp. 10 (1998) 403-430.

[10] P.
Dayan and R. S. Zemel, Statistical models and sensory attention, in: D. Willshaw and A. Murray (eds), Proceedings of the Ninth International Conference on Artificial Neural Networks, ICANN 99, University of Edinburgh (1999) 1017-1022.

[11] D. H. Hubel and T. Wiesel, Receptive fields and functional architecture of monkey striate cortex, J. Physiol. 195 (1968) 215-244.

[12] N. V. Swindale, Orientation tuning curves: empirical description and estimation of parameters, Biol. Cybern. 78 (1998) 45-56.

[13] J. J. Knierim and D. van Essen, Neuronal responses to static texture patterns in area V1 of the alert macaque monkey, J. Neurophysiol. 67 (1992) 961-979.

[14] A. M. Sillito, K. Grieve, H. Jones, J. Cudeiro and J. Davies, Visual cortical mechanisms detecting focal orientation discontinuities, Nature 378 (1995) 492-496.

[15] R. J. A. van Wezel, M. J. M. Lankheet, F. A. J. Verstraten, A. F. M. Marée and W. A. van de Grind, Responses of complex cells in area 17 of the cat to bi-vectorial transparent motion, Vis. Res. 36 (1996) 2805-2813.

[16] G. H. Recanzone, R. H. Wurtz and U. Schwarz, Responses of MT and MST neurons to one and two moving objects in the receptive field, J. Neurophysiol. 78 (1997) 2904-2915.

[17] Y. Yeshurun and M. Carrasco, Attention improves or impairs visual performance by enhancing spatial resolution, Nature 396 (1998) 72-75.
", "award": [], "sourceid": 2270, "authors": [{"given_name": "Christian", "family_name": "Eurich", "institution": null}]}