{"title": "A Bayesian Model of Conditioned Perception", "book": "Advances in Neural Information Processing Systems", "page_first": 1409, "page_last": 1416, "abstract": "We propose an extended probabilistic model for human perception. We argue that in many circumstances, human observers simultaneously evaluate sensory evidence under different hypotheses regarding the underlying physical process that might have generated the sensory information. Within this context, inference can be optimal if the observer weighs each hypothesis according to the correct belief in that hypothesis. But if the observer commits to a particular hypothesis, the belief in that hypothesis is converted into subjective certainty, and subsequent perceptual behavior is suboptimal, conditioned only on the chosen hypothesis. We demonstrate that this framework can explain psychophysical data of a recently reported decision-estimation experiment. The model well accounts for the data, predicting the same estimation bias as a consequence of the preceding decision step. The power of the framework is that it has no free parameters except the degree of the observer's uncertainty about its internal sensory representation. All other parameters are defined by the particular experiment which allows us to make quantitative predictions of human perception to two modifications of the original experiment.", "full_text": "A Bayesian Model of Conditioned Perception\n\nAlan A. Stocker\n\n\u2217\n\nand Eero P. Simoncelli\n\nHoward Hughes Medical Institute,\n\nCenter for Neural Science,\n\nand Courant Institute of Mathematical Sciences\n\nNew York University\n\nNew York, NY-10003, U.S.A.\n\nWe argue that in many circumstances, human observers evaluate sensory evidence\nsimultaneously under multiple hypotheses regarding the physical process that has\ngenerated the sensory information. 
In such situations, inference can be optimal if an observer combines the evaluation results under each hypothesis according to the probability that the associated hypothesis is correct. However, a number of experimental results reveal suboptimal behavior and may be explained by assuming that once an observer has committed to a particular hypothesis, subsequent evaluation is based on that hypothesis alone. That is, observers sacrifice optimality in order to ensure self-consistency. We formulate this behavior using a conditional Bayesian observer model, and demonstrate that it can account for psychophysical data from a recently reported perceptual experiment in which strong biases in perceptual estimates arise as a consequence of a preceding decision. Not only does the model provide quantitative predictions of subjective responses in variants of the original experiment, but it also appears to be consistent with human responses to cognitive dissonance.\n\n1 Motivation\n\nIs the glass half full or half empty? In different situations, the very same perceptual evidence (e.g. the perceived level of liquid in a glass) can be interpreted very differently. Our perception is conditioned on the context within which we judge the evidence. Perhaps we witnessed the process of the glass being filled, and thus would more naturally think of it as half full. Maybe it is the only glass on the table that has liquid remaining, and thus its precious content would be regarded as half full. Or maybe we simply like the content so much that we cannot have enough, in which case we may view it as being half empty.\n\nContextual influences in low-level human perception are the norm rather than the exception, and have been widely reported. Perceptual illusions, for example, often exhibit particularly strong contextual effects, either in terms of perceptual space (e.g. 
spatial context affects perceived brightness; see [1] for impressive examples) or time (prolonged exposure to an adaptor stimulus will affect subsequent perception; see e.g. the motion after-effect [2]). Data from recent psychophysical experiments suggest that an observer\u2019s previous perceptual decisions provide an additional form of context that can substantially influence subsequent perception [3, 4]. In particular, the outcome of a categorical decision task can strongly bias a subsequent estimation task that is based on the same stimulus presentation. Contextual influences are typically strongest when the sensory evidence is most ambiguous in terms of its interpretation, as in the example of the half-full (or half-empty) glass.\n\n\u2217Corresponding author.\n\nBayesian estimators have proven successful in modeling human behavior in a wide variety of low-level perceptual tasks (for example: cue integration (see e.g. [5]), color perception (e.g. [6]), and visual motion estimation (e.g. [7, 8])). But they generally do not incorporate contextual dependencies beyond a prior distribution (reflecting past experience) over the variable of interest. Contextual dependencies may be incorporated in a Bayesian framework by assuming that human observers, when performing a perceptual task, test different hypotheses about the underlying structure of the sensory evidence, and arrive at an estimate by weighting the estimates under each hypothesis according to the strength of their belief in that hypothesis. This approach is known as optimal model evaluation [9] or Bayesian model averaging [10], and has previously been suggested to account for cognitive reasoning [11]. It has further been suggested that the brain could use different neuromodulators to keep track of the probabilities of individual hypotheses [12]. 
Contextual effects are reflected in the observer\u2019s selection and evaluation of these hypotheses, and thus vary with experimental conditions. For the particular case of cue integration, Bayesian model averaging has been proposed and tested against data [13, 14], suggesting that some of the observed non-linearities in cue integration are the result of the human perceptual system taking into account multiple potential contextual dependencies.\n\nIn contrast to these studies, however, we propose that model averaging behavior is abandoned once the observer has committed to a particular hypothesis. Specifically, subsequent perception is conditioned only on the chosen hypothesis, thus sacrificing optimality in order to achieve self-consistency. We examine this hypothesis in the context of a recent experiment in which subjects were asked to estimate the direction of motion of random dot patterns after being forced to make a categorical decision about whether the direction of motion fell on one side or the other of a reference mark [4]. Depending on the level of motion coherence, responses on the estimation task were heavily biased by the categorical decision. We demonstrate that a self-consistent conditional Bayesian model can account for mean behavior, as well as behavior on individual trials [8]. The model has essentially no free parameters, and in addition is able to make precise predictions under a wide variety of alternative experimental arrangements. We provide two such example predictions.\n\n2 Observer Model\n\nWe define perception as a statistical estimation problem in which an observer tries to infer the value of some environmental variable s based on sensory evidence m (see Fig. 1). Typically, there are sources of uncertainty associated with m, including both sensor noise and uncertainty about the relationship between the sensory evidence and the variable s. 
We refer to the latter as structural uncertainty, which represents the degree of ambiguity in the observer\u2019s interpretation of the physical world. In cases where the structural possibilities are discrete, we denote them as a set of hypotheses H = {h1, ..., hN}. Perceptual inference requires two steps. First, the observer computes their belief in each hypothesis for given sensory evidence m. Using Bayes\u2019 identity, the belief is expressed as the posterior\n\np(H|m) = p(m|H)p(H) / p(m) .  (1)\n\nSecond, for each hypothesis, a conditional posterior is formulated as p(s|m, H = hi), and the full (non-conditional) posterior is computed by integrating the evidence over all hypotheses, weighted by the belief in each hypothesis hi:\n\np(s|m) = \sum_{i=1}^{N} p(s|m, H = hi) p(H = hi|m) .  (2)\n\nFinally, the observer selects an estimate \u02c6s that minimizes the expected value (under the posterior) of an appropriate loss function.1\n\nFigure 1: Perception as a conditioned inference problem. Based on noisy sensory measurements m, the observer generates different hypotheses for the generative structure that relates m to the stimulus variable s. Perception is a two-fold inference problem: Given the measurement and prior knowledge, the observer generates and evaluates different structural hypotheses hi. Conditioned on this evaluation, they then infer an estimate \u02c6s(m) from the measurement m.\n\n2.1 Decision leads to conditional estimation\n\nIn situations where the observer has already made a decision (either explicit or implicit) to select one hypothesis as being correct, we postulate that subsequent inference will be based on that hypothesis alone, rather than averaging over the full set of hypotheses. 
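The two inference steps of Eqs. (1) and (2) can be sketched numerically. The following is a minimal, illustrative discretization, not the paper's implementation: two hypotheses (stimulus left vs. right of a reference at zero), uniform conditional priors, a uniform prior over hypotheses, and Gaussian measurement noise. The grid resolution and the values of alpha and sigma are arbitrary assumptions.

```python
import numpy as np

# Illustrative discretization of Eqs. (1)-(2). Two hypotheses (stimulus left
# vs. right of a reference at 0), uniform conditional priors p(s|H), a uniform
# prior p(H), and Gaussian measurement noise. All parameters are hypothetical.
alpha, sigma = 20.0, 6.0                        # prior range and noise width
s = np.linspace(-alpha, alpha, 2001)            # grid over the stimulus variable
ds = s[1] - s[0]

p_H = np.array([0.5, 0.5])                      # prior over hypotheses p(H)
p_s_H = np.stack([(s <= 0), (s >= 0)]).astype(float)
p_s_H /= p_s_H.sum(axis=1, keepdims=True) * ds  # each row integrates to 1

def infer(m):
    """Belief p(H|m) (Eq. 1) and model-averaged posterior p(s|m) (Eq. 2)."""
    lik = np.exp(-0.5 * ((m - s) / sigma) ** 2)         # p(m|s), Gaussian noise
    p_m_H = (lik * p_s_H).sum(axis=1) * ds              # p(m|H)
    p_H_m = p_m_H * p_H / (p_m_H * p_H).sum()           # Eq. (1), normalized
    p_s_mH = lik * p_s_H
    p_s_mH /= p_s_mH.sum(axis=1, keepdims=True) * ds    # p(s|m, h_i)
    p_s_m = (p_H_m[:, None] * p_s_mH).sum(axis=0)       # Eq. (2): averaging
    return p_H_m, p_s_m

belief, post = infer(2.0)  # a measurement just right of the reference
```

For the ambiguous measurement chosen here, the belief only moderately favors the rightward hypothesis, and the averaged posterior retains mass on both sides of the reference; for larger measurements the belief concentrates on a single hypothesis.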
For example, suppose the observer selects the maximum a posteriori hypothesis hMAP, the hypothesis that is most probable given the sensory evidence and the prior distribution. We assume that this decision then causes the observer to reset the posterior probabilities over the hypotheses to\n\np(H|m) = 1, if H = hMAP\n        = 0, otherwise.  (3)\n\nThat is, the decision-making process forces the observer to consider the selected hypothesis as correct, with all other hypotheses rendered impossible. Changing the beliefs over the hypotheses will obviously affect the estimate \u02c6s in our model. Applying the new posterior probabilities of Eq. (3) simplifies the inference problem Eq. (2) to\n\np(s|m) = p(s|m, H = hMAP) .  (4)\n\nWe argue that this simplification by decision is essential for complex perceptual tasks (see Discussion). By making a decision, the observer frees resources, eliminating the need to continuously represent probabilities about other hypotheses, and also simplifies the inference problem. The price to pay is that the subsequent estimate is typically biased and sub-optimal.\n\n3 Example: Conditioned Perception of Visual Motion\n\nWe tested our observer model by simulating a recently reported psychophysical experiment [4]. Subjects in this experiment were asked on each trial to decide whether the overall motion direction of a random dot pattern was to the right or to the left of a reference mark (as seen from the fixation point). Low levels of motion coherence made the decision task difficult for motion directions close to the reference mark. In a subset of randomly selected trials, subjects were also asked to estimate the precise angle of motion direction (see Fig. 2). 
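The behavioral consequence of Eqs. (3)-(4) can be illustrated with a small numerical sketch. The setup is a hypothetical stand-in for such a task: two hypotheses splitting a uniform direction range at a reference at zero, with Gaussian measurement noise; the values of alpha and sigma are arbitrary, not fits to data.

```python
import numpy as np

# Toy comparison of the model-averaged estimate (Eq. 2) with the estimate
# conditioned on the chosen hypothesis h_MAP (Eqs. 3-4). The split-uniform
# priors, Gaussian noise, and all numbers are illustrative assumptions.
alpha, sigma = 20.0, 6.0
s = np.linspace(-alpha, alpha, 2001)
ds = s[1] - s[0]
p_s_H = np.stack([(s <= 0), (s >= 0)]).astype(float)
p_s_H /= p_s_H.sum(axis=1, keepdims=True) * ds

def estimates(m):
    lik = np.exp(-0.5 * ((m - s) / sigma) ** 2)
    p_m_H = (lik * p_s_H).sum(axis=1) * ds
    belief = p_m_H / p_m_H.sum()                     # Eq. (1); uniform p(H) cancels
    cond = lik * p_s_H
    cond /= cond.sum(axis=1, keepdims=True) * ds     # p(s|m, h_i)
    averaged = (belief[:, None] * cond).sum(axis=0)  # Eq. (2): model averaging
    committed = cond[int(np.argmax(belief))]         # Eqs. (3)-(4): h_MAP only
    mean = lambda p: (s * p).sum() * ds              # squared-error loss: mean
    return mean(averaged), mean(committed)

avg_est, map_est = estimates(2.0)
# Committing to "rightward" discards all leftward posterior mass, so the
# conditioned estimate is pushed away from the reference relative to the
# averaged one: a repulsive bias.
```

This makes the cost of self-consistency concrete: for a measurement near the boundary, the committed estimate overshoots where the averaged estimate stays close to the reference.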
The decision task always preceded the estimation task, but at the time of the decision, subjects were unaware whether they would have to perform the estimation task or not.\n\n3.1 Formulating the observer model\n\nWe denote \u03b8 as the direction of coherent motion of the random dot pattern, and m the noisy sensory measurement. Suppose that on a given trial the measurement m indicates a direction of motion to the right of the reference mark. The observer can consider two hypotheses H = {h1, h2} about the actual physical motion of the random dot pattern: Either the true motion is actually to the right and thus in agreement with the measurement, or it is to the left but noise has disturbed the measurement such that it indicates motion to the right.\n\n1For the purpose of this paper, we assume a standard squared-error loss function, in which case the observer should choose the mean of the posterior distribution.\n\nFigure 2: Decision-estimation experiment. (a) Jazayeri and Movshon presented moving random dot patterns to subjects and asked them to decide if the overall motion direction was either to the right or the left of a reference mark [4]. Random dot patterns could exhibit three different levels of motion coherence (3, 6, and 12%), and the single coherent motion direction was randomly selected from a uniform distribution over a symmetric range of angles [\u2212\u03b1, \u03b1] around the reference mark. (b) In a randomly selected 30% of trials, subjects were also asked, after making the directional decision, to estimate the exact angle of motion direction by adjusting an arrow to point in the direction of perceived motion. In a second version of the experiment, motion was either toward the direction of the reference mark or in the opposite direction.\n\n
The observer\u2019s belief in each of the two hypotheses based on their measurement is given by the posterior distribution according to Eq. (1), with the likelihood\n\np(m|H) = \int_{\u2212\u03c0}^{\u03c0} p(m|\u03b8, H) p(\u03b8|H) d\u03b8 .  (5)\n\nThe optimal decision is to select the hypothesis hMAP that maximizes the posterior given by Eq. (1).\n\n3.2 Model observer vs. human observer\n\nThe subsequent conditioned estimate of motion direction then follows from Eq. (4), which can be rewritten as\n\np(\u03b8|m) = p(m|\u03b8, H = hMAP) p(\u03b8|H = hMAP) / p(m|H = hMAP) .  (6)\n\nThe model is completely characterized by three quantities: the likelihood functions p(m|\u03b8, H), the prior distributions p(\u03b8|H) of the direction of motion given each hypothesis, and the prior on the hypotheses p(H) itself (shown in Fig. 3). In the given experimental setup, both prior distributions were uniform, but the width parameter of the motion direction, \u03b1, was not explicitly available to the subjects and had to be individually learned from training trials. In general, subjects seem to over-estimate this parameter (up to a factor of two), and adjusting its value in the model accounts for most of the variability between subjects. The likelihood function p(m|\u03b8, H) is determined by the uncertainty about the motion direction due to the low motion coherence levels in the stimuli and the sensory noise characteristics of the observer. We assumed it to be Gaussian, with a width that varies inversely with the coherence level. Values were estimated from the data plots in [4].\n\nFigure 4 compares the prediction of the observer model with human data. Trial data of the model were generated by first sampling a hypothesis h\u2032 according to p(H), then drawing a stimulus direction from p(\u03b8|H = h\u2032), 
then picking a sensory measurement sample m according to the conditional probability p(m|\u03b8, H = h\u2032), and finally performing inference according to Eqs. (1) and (6). The model captures the characteristics of human behavior in both the decision and the subsequent estimation task. Note the strong influence of the decision task on the subsequent estimation of the motion direction, effectively pushing the estimates away from the decision boundary.\n\nWe also compared the model with a second version of the experiment, in which the decision task was to discriminate between motion toward and away from the reference [4]. Coherent motion of the random dot pattern was uniformly sampled from a range around the reference and from a range around the direction opposite to the reference, as illustrated by the prior distributions shown in Fig. 5. Again, note that these distributions are given by the experiment and thus, assuming the same noise characteristics as in the first experiment, the model has no free parameters.\n\nFigure 3: Ingredients of the conditional observer model. The sensory signal is assumed to be corrupted by additive Gaussian noise, with a width that varies inversely with the level of motion coherence (3, 6, and 12%). Actual widths were approximated from those reported in [4]. The prior distribution over the hypotheses p(H) is uniform. The two prior distributions over motion direction given each hypothesis, p(\u03b8|H = h1,2), are again determined by the experimental setup, and are uniform over the range [0,\u00b1\u03b1].\n\n3.3 Predictions\n\nThe model framework also allows us to make quantitative predictions of human perceptual behavior under conditions not yet tested. Figure 6 shows the model observer\u2019s behavior under two modifications of the original experiment. 
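The trial-generation procedure described in the text (sample h\u2032 from p(H), a direction from p(\u03b8|H = h\u2032), a noisy measurement m, then decide and estimate conditioned on the decision) can be sketched as a Monte-Carlo simulation. This is an illustrative toy version, not the authors' code; the parameter values, grid, and trial counts are arbitrary assumptions.

```python
import numpy as np

# Monte-Carlo sketch of the trial-generation procedure described in the text:
# sample a hypothesis h', a direction theta from p(theta|h'), a noisy
# measurement m, then decide (MAP) and estimate conditioned on the decision.
# All parameter values are illustrative assumptions, not fits to [4].
rng = np.random.default_rng(0)
alpha, sigma = 20.0, 6.0
grid = np.linspace(-alpha, alpha, 1001)
dg = grid[1] - grid[0]

def decide_and_estimate(m):
    lik = np.exp(-0.5 * ((m - grid) / sigma) ** 2)
    went_right = (lik * (grid >= 0)).sum() >= (lik * (grid <= 0)).sum()
    support = (grid >= 0) if went_right else (grid <= 0)
    post = lik * support                   # posterior under h_MAP, cf. Eq. (6)
    post /= post.sum() * dg
    return went_right, (grid * post).sum() * dg  # posterior-mean estimate

trials = []
for _ in range(4000):
    h = rng.integers(2)                               # h' ~ p(H), uniform
    theta = rng.uniform(0, alpha) * (1 if h else -1)  # theta ~ p(theta|h')
    m = theta + rng.normal(0.0, sigma)                # noisy measurement
    went_right, est = decide_and_estimate(m)
    trials.append((theta, went_right, est))

theta_v, right_v, est_v = map(np.array, zip(*trials))
# Among "rightward" decisions with true directions just right of the reference,
# the mean estimate exceeds the mean true direction: a repulsive bias.
sel = right_v & (theta_v > 0) & (theta_v < 5)
bias = est_v[sel].mean() - theta_v[sel].mean()
```

The selection of trials near the boundary is where the effect is largest; far from the reference the conditioned and unconditioned estimates nearly coincide.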
The first is identical to the experiment shown in Fig. 4, but with unequal prior probability on the two hypotheses. The model predicts that a human subject would respond to this change by more frequently choosing the more likely hypothesis. However, this hypothesis would also be more likely to be correct, and thus the estimates under this hypothesis would exhibit less bias than in the original experiment.\n\nThe second modification is to add a second reference and ask the subject to decide between three different classes of motion direction (e.g. left, central, right). Again, the model predicts that in such a case, a human subject\u2019s estimate in the central direction should be biased away from both decision boundaries, thus leading to an almost constant direction estimate. Estimates following a decision in favor of the two outer classes show the same repulsive bias as seen in the original experiment.\n\n4 Discussion\n\nWe have presented a normative model for human perception that captures the conditioning effects of decisions on an observer\u2019s subsequent evaluation of sensory evidence. The model is based on the premise that observers aim for optimal inference (taking into account all sensory evidence and prior information), but that they exhibit decision-induced biases because they also aim to be self-consistent, eliminating alternatives that have been decided against. We have demonstrated that this model can account for the experimental results of [4].\n\nAlthough this strategy is suboptimal (in that it does not minimize expected loss), it provides two fundamental advantages. First, self-consistency would seem an important requirement for a stable interpretation of the environment, and adhering to it might outweigh the disadvantages of perceptual misjudgments. 
Second, framing perception in terms of optimal statistical estimation implies that the more information an observer evaluates, the more accurately they should be able to solve a perceptual task. But this assumes that the observer can construct and retain full probability distributions and perform optimal inference calculations on these. Presumably, accumulating more probabilistic evidence of more complex conditional dependencies has a cost, both in terms of storage and in terms of the computational load of performing subsequent inference. Thus, discarding information after making a decision can help to keep this storage and the computational complexity at a manageable level, freeing computational resources to perform other tasks.\n\nFigure 4: Comparison of model predictions with data for a single subject. Upper left: The two panels show the percentage of observed motion to the right as a function of the true pattern direction, for the three coherence levels tested (3, 6, and 12%). The model accurately predicts the subject\u2019s behavior, which exhibits a decrease in the number of false decisions with decreasing noise levels and increasing distance to the reference. 
Lower left: Mean estimates of the direction of motion after performing the decision task. Clearly, the decision has a substantial impact on the subsequent estimate, producing a strong bias away from the reference. The model response exhibits biases similar to those of the human subjects, with lower coherence levels producing stronger repulsive effects. Right: Grayscale images show distributions of estimates across trials for both the human subject and the model observer, for all three coherence levels. All trials are included (correct and incorrect). White dashed lines represent veridical estimates. The model observer performed 40 trials at each motion direction (in 1.5 degree increments). Human data are replotted from [4].\n\nAn interesting avenue for exploration is the implementation of such an algorithm in a neural substrate. Recent studies propose a means by which populations of neurons can represent and multiply probability distributions [15]. It would be worthwhile to consider how the model presented here could be implemented with such a neural mechanism. In particular, one might expect that the sudden change in posterior probabilities over the hypotheses associated with the decision task would be reflected in sudden changes in the response pattern of such populations [16].\n\nQuestions remain. For the experiment we have modeled, the hypotheses were specified by the two alternatives of the decision task, and the subjects were forced to choose one of them. What happens in more general situations? First, do humans always decompose perceptual inference tasks into a set of inference problems, each conditioned on a different hypothesis? Data from other cue-combination experiments suggest that subjects indeed seem to perform such probabilistic decomposition [13, 14]. If so, then how do observers generate these hypotheses? 
In the absence of explicit instructions, humans may automatically perform implicit comparisons relative to reference features that are unconsciously selected from the environment. Second, if humans do consider different hypotheses, do they always select a single one on which subsequent percepts are conditioned, even if not explicitly asked to do so? For example, simply displaying the reference mark in the experiment of [4] (without asking the observer to report any decision) might be sufficient to trigger an implicit decision that would result in behaviors similar to those shown in the explicit case.\n\nFigure 5: Comparison of model predictions with data for the second experiment. Left: Prior distributions for the second experiment in [4]. Right: Grayscale images show the trial distributions of the human subject and the model observer for all three coherence levels. White dashed lines represent veridical estimates. Note that the human subject does not show any significant bias in their estimate. The trial variance appears to increase with decreasing levels of coherence. Both characteristics are well predicted by the model. Human data replotted from [4] (supplementary material).\n\nFinally, although we have only tested it on data from a particular psychophysical experiment, we believe that our model may have implications beyond low-level sensory perception. For instance, a well-studied human attribute is known as cognitive dissonance [17], which causes people to adjust their opinions and beliefs to be consistent with their previous statements or behaviors. 
Thus, self-consistency may be a principle that governs computations throughout the brain.2\n\nAcknowledgments\n\nWe thank J. Tenenbaum for referring us to the cognitive dissonance literature, and J. Pillow, N. Daw, D. Heeger, A. Movshon, and M. Jazayeri for interesting discussions.\n\nReferences\n\n[1] E.H. Adelson. Perceptual organization and the judgment of brightness. Science, 262:2042\u20132044, December 1993.\n\n[2] S.P. Thompson. Optical illusions of motion. Brain, 3:289\u2013298, 1880.\n\n[3] S. Baldassi, N. Megna, and D.C. Burr. Visual clutter causes high-magnitude errors. PLoS Biology, 4(3):387ff, March 2006.\n\n[4] M. Jazayeri and J.A. Movshon. A new perceptual illusion reveals mechanisms of sensory decoding. Nature, 446:912ff, April 2007.\n\n[5] M.O. Ernst and M.S. Banks. Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415:429ff, January 2002.\n\n[6] D. Brainard and W. Freeman. Bayesian color constancy. Journal of the Optical Society of America A, 14(7):1393\u20131411, July 1997.\n\n2An example that is directly analogous to the perceptual experiment in [4] is documented in [18]: Subjects initially rated kitchen appliances for attractiveness, and then were allowed to select one as a gift from amongst two that they had rated equally. They were subsequently asked to rate the appliances again. 
The data show a repulsive bias of the post-decision ratings compared with the pre-decision ratings, such that the rating of the selected appliance increased, and the rating of the rejected appliance decreased.\n\nFigure 6: Model predictions for two modifications of the original experiment. A: We change the prior probability p(H) to be asymmetric (0.8 vs. 0.2). However, we keep the prior distribution of motion directions given a particular side, p(\u03b8|H), constant within the range [0,\u00b1\u03b1]. The model makes two predictions (trials shown for an intermediate coherence level): First, although tested with an equal number of trials for each motion direction, there is a strong bias induced by the asymmetric prior. And second, the direction estimates on the left are more veridical than on the right. B: We present two reference marks instead of one, asking the subjects to make a choice between three equally likely regions of motion direction. Again, we assume uniform prior distributions of motion directions within each area. The model predicts bilateral repulsion of the estimates in the central area, leading to a strong bias that is almost independent of coherence level.\n\n[7] Y. Weiss, E. Simoncelli, and E. Adelson. Motion illusions as optimal percepts. Nature Neuroscience, 5(6):598\u2013604, June 2002.\n\n[8] A.A. Stocker and E.P. Simoncelli. Noise characteristics and prior expectations in human visual speed perception. 
Nature Neuroscience, 9:578\u2013585, April 2006.\n\n[9] D. Draper. Assessment and propagation of model uncertainty. Journal of the Royal Statistical Society B, 57:45\u201397, 1995.\n\n[10] J.A. Hoeting, D. Madigan, A.E. Raftery, and C.T. Volinsky. Bayesian model averaging: A tutorial. Statistical Science, 14(4):382\u2013417, 1999.\n\n[11] T.L. Griffiths, C. Kemp, and J. Tenenbaum. Handbook of Computational Cognitive Modeling, chapter Bayesian models of cognition. Cambridge University Press, to appear.\n\n[12] A.J. Yu and P. Dayan. Uncertainty, neuromodulation, and attention. Neuron, 46:681ff, May 2005.\n\n[13] D. Knill. Robust cue integration: A Bayesian model and evidence from cue-conflict studies with stereoscopic and figure cues to slant. Journal of Vision, 7(7):1\u201324, May 2007.\n\n[14] K. K\u00f6rding and J. Tenenbaum. Causal inference in sensorimotor integration. In B. Sch\u00f6lkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19. MIT Press, 2007.\n\n[15] W.J. Ma, J.M. Beck, P.E. Latham, and A. Pouget. Bayesian inference with probabilistic population codes. Nature Neuroscience, 9:1432ff, November 2006.\n\n[16] M.E. Mazurek, J.D. Roitman, J. Ditterich, and M.N. Shadlen. A role for neural integrators in perceptual decision-making. Cerebral Cortex, 13:1257\u20131269, 2003.\n\n[17] L. Festinger. A Theory of Cognitive Dissonance. Stanford University Press, Stanford, CA, 1957.\n\n[18] J.W. Brehm. Post-decision changes in the desirability of alternatives. Journal of Abnormal and Social Psychology, 52(3):384ff., 1956.\n", "award": [], "sourceid": 1016, "authors": [{"given_name": "Alan", "family_name": "Stocker", "institution": null}, {"given_name": "Eero", "family_name": "Simoncelli", "institution": null}]}