{"title": "Individuation, Identification and Object Discovery", "book": "Advances in Neural Information Processing Systems", "page_first": 925, "page_last": 933, "abstract": "Humans are typically able to infer how many objects their environment contains and to recognize when the same object is encountered twice. We present a simple statistical model that helps to explain these abilities and evaluate it in three behavioral experiments. Our first experiment suggests that humans rely on prior knowledge when deciding whether an object token has been previously encountered. Our second and third experiments suggest that humans can infer how many objects they have seen and can learn about categories and their properties even when they are uncertain about which tokens are instances of the same object.", "full_text": "Object discovery and identification\n\nCharles Kemp & Alan Jern\nDepartment of Psychology\nCarnegie Mellon University\n{ckemp,ajern}@cmu.edu\n\nFei Xu\nDepartment of Psychology\nUniversity of California, Berkeley\nfei xu@berkeley.edu\n\nAbstract\n\nHumans are typically able to infer how many objects their environment contains and to recognize when the same object is encountered twice. We present a simple statistical model that helps to explain these abilities and evaluate it in three behavioral experiments. Our first experiment suggests that humans rely on prior knowledge when deciding whether an object token has been previously encountered. Our second and third experiments suggest that humans can infer how many objects they have seen and can learn about categories and their properties even when they are uncertain about which tokens are instances of the same object.\n\nFrom an early age, humans and other animals [1] appear to organize the flux of experience into a series of encounters with discrete and persisting objects. Consider, for example, a young child who grows up in a home with two dogs. 
At a relatively early age the child will solve the problem of object discovery and will realize that her encounters with dogs correspond to views of two individuals rather than one or three. The child will also solve the problem of identification, and will be able to reliably identify an individual (e.g. Fido) each time it is encountered.\nThis paper presents a Bayesian approach that helps to explain both object discovery and identification. Bayesian models are appealing in part because they help to explain how inferences are guided by prior knowledge. Imagine, for example, that you see some photographs taken by your friends Alice and Bob. The first shot shows Alice sitting next to a large statue and eating a sandwich, and the second is similar but features Bob rather than Alice. The statues in each photograph look identical, and probably you will conclude that the two photographs are representations of the same statue. The sandwiches in the photographs also look identical, but probably you will conclude that the photographs show different sandwiches. The prior knowledge that contributes to these inferences appears rather complex, but we will explore some much simpler cases where prior knowledge guides identification.\nA second advantage of Bayesian models is that they help to explain how learners cope with uncertainty. In some cases a learner may solve the problem of object discovery but should maintain uncertainty when faced with identification problems. For example, I may be quite certain that I have met eight different individuals at a dinner party, even if I am unable to distinguish between two guests who are identical twins. In other cases a learner may need to reason about several related problems even if there is no definitive solution to any one of them. Consider, for example, a young child who must simultaneously discover which objects her world contains (e.g. 
Mother, Father, Fido, and Rex) and organize them into categories (e.g. people and dogs). Many accounts of categorization seem to implicitly assume that the problem of identification must be solved before categorization can begin, but we will see that a probabilistic approach can address both problems simultaneously.\nIdentification and object discovery have been discussed by researchers from several disciplines, including psychology [2, 3, 4, 5, 6], machine learning [7, 8], statistics [9], and philosophy [10]. Many machine learning approaches can handle identity uncertainty, or uncertainty about whether two tokens correspond to the same object. Some approaches, such as BLOG [8], are able in addition to handle problems where the number of objects is not specified in advance. We propose that some of these approaches can help to explain human learning, and this paper uses a simple BLOG-style approach [8] to account for human inferences.\nThere are several existing psychological models of identification, and the work of Shepard [11], Nosofsky [3] and colleagues is probably the most prominent. Models in this tradition usually focus on problems where the set of objects is specified in advance and where identity uncertainty arises as a result of perceptual noise. In contrast, we focus on problems where the number of objects must be inferred and where identity uncertainty arises from partial observability rather than noise. A separate psychological tradition focuses on problems where the number of objects is not fixed in advance. Developmental psychologists, for example, have used displays where only one object token is visible at any time to explore whether young infants can infer how many different objects have been observed in total [4]. 
Our work emphasizes some of the same themes as this developmental research, but we go beyond previous work in this area by presenting and evaluating a computational approach to object identification and discovery.\nThe problem of deciding how many objects have been observed is sometimes called individuation [12], but here we treat individuation as a special case of object discovery. Note, however, that object discovery can also refer to cases where learners infer the existence of objects that have never been observed. Unobserved-object discovery has received relatively little attention in the psychological literature, but is addressed by statistical models including species-sampling models [9] and capture-recapture models [13]. Simple statistical models of this kind will not address some of the most compelling examples of unobserved-object discovery, such as the discovery of the planet Neptune, or the ability to infer the existence of a hidden object by following another person's gaze [14]. We will show, however, that a simple statistical approach helps to explain how humans infer the existence of objects that they have never seen.\n\n1 A probabilistic account of object discovery and identification\nObject discovery and identification may depend on many kinds of observations and may be supported by many kinds of prior knowledge. This paper considers a very simple setting where these problems can be explored. Suppose that an agent is learning about a world that contains n_w white balls and n - n_w gray balls. Let f(o_i) indicate the color of ball o_i, where each ball is white (f(o_i) = 1) or gray (f(o_i) = 0). An agent learns about the world by observing a sequence of object tokens. Suppose that label l(j) is a unique identifier of token j; in other words, suppose that the jth token is a token of object o_l(j). Suppose also that the jth token is observed to have feature value g(j). 
Note the difference between f and g: f is a vector that specifies the color of the n balls in the world, and g is a vector that specifies the color of the object tokens observed thus far.\nWe define a probability distribution over token sequences by assuming that a world is sampled from a prior P(n, n_w) and that tokens are sampled from this world. The full generative model is:\n\nP(n) ∝ 1/n if n ≤ 1000, and P(n) = 0 otherwise   (1)\nn_w | n ~ Uniform(0, n)   (2)\nl(j) | n ~ Uniform(1, n)   (3)\ng(j) = f(o_l(j))   (4)\n\nA prior often used for inferences about a population of unknown size is the scale-invariant Jeffreys prior P(n) = 1/n [15]. We follow this standard approach here but truncate at n = 1000. Choosing some upper bound is convenient when implementing the model, and has the advantage of producing a prior that is proper (note that the Jeffreys prior is improper). Equation 2 indicates that the number of white balls n_w is sampled from a discrete uniform distribution. Equation 3 indicates that each token is generated by sampling one of the n balls in the world uniformly at random, and Equation 4 indicates that the color of each token is observed without noise.\nThe generative assumptions just described can be used to define a probabilistic approach to object discovery and identification. Suppose that the observations available to a learner consist of a fully-observed feature vector g and a partially-observed label vector l_obs. Object discovery and identification can be addressed by using the posterior distribution P(l | g, l_obs) to make inferences about the number of distinct objects observed and about the identity of each token. Computing the posterior distribution P(n | g, l_obs) allows the learner to make inferences about the total number of objects in the world. 
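Because the hypothesis space is discrete, the posterior P(n | g, l_obs) can be computed exactly by enumerating and summing over worlds. The Python sketch below is our own illustration, not the authors' implementation: it handles the special case where every token's label is observed, and it lowers the truncation bound from 1000 to keep enumeration fast.

```python
from math import comb

N_MAX = 20  # truncation lowered from the paper's 1000 to keep this sketch fast

def posterior_n(tokens):
    # Posterior P(n | data) when every token is identified.
    # tokens: list of (label, color) pairs; color is 1 (white) or 0 (gray).
    colors = dict(tokens)           # observation is noise-free, so repeats agree
    k = len(colors)                 # distinct balls observed
    k_w = sum(colors.values())      # distinct white balls observed
    m = len(tokens)
    post = {}
    for n in range(1, N_MAX + 1):
        if max(colors) > n:
            post[n] = 0.0           # impossible: an observed label exceeds n
            continue
        # Equation 2 gives P(n_w | n) = 1/(n+1); the observed colors of the k
        # distinct balls then have a hypergeometric probability under the
        # uniform choice of which n_w of the n balls are white.
        p_colors = sum(
            comb(n - k, n_w - k_w) / comb(n, n_w) / (n + 1)
            for n_w in range(k_w, n - (k - k_w) + 1)
        )
        # Prior (Equation 1) is proportional to 1/n, and each of the m
        # observed labels has probability 1/n (Equation 3).
        post[n] = (1.0 / n) * p_colors * (1.0 / n) ** m
    z = sum(post.values())
    return {n: p / z for n, p in post.items()}

# Condition 1a (one white ball seen five times) vs 1b (five different balls):
same = posterior_n([(1, 1)] * 5)
diff = posterior_n([(i, 1) for i in range(1, 6)])
```

As expected, the posterior concentrates on a single ball in the first case and rules out any n below five in the second.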
In some cases, the learner may solve the problem of unobserved-object discovery by realizing that the world contains more objects than she has observed thus far.\nThe next sections explore the idea that the inferences made by humans correspond approximately to the inferences of this ideal learner. Since the ideal learner allows for the possible existence of objects that have not yet been observed, we refer to our model as the open world model. Although we make no claim about the psychological mechanisms that might allow humans to approximate the predictions of the ideal learner, in practice we need some method for computing the predictions of our model. Since the domains we consider are relatively small, all results in this paper were computed by enumerating and summing over the complete set of possible worlds.\n\n2 Experiment 1: Prior knowledge and identification\n\nThe introduction described a scenario (the statue and sandwiches example) where prior knowledge appears to guide identification. Our first experiment explores a very simple instance of this idea. We consider a setting where participants observe balls that are sampled with replacement from an urn. In one condition, participants sample the same ball from the urn on four consecutive occasions and are asked to predict whether the token observed on the fifth draw is the same ball that they saw on the first draw. In a second condition participants are asked exactly the same question about the fifth token but sample four different balls on the first four draws. 
We expect that these different patterns of data will shape the prior beliefs that participants bring to the identification problem involving the fifth token, and that participants in the first condition will be substantially more likely to identify the fifth token as a ball that they have seen before.\nAlthough we consider an abstract setting involving balls and urns, the problem we explore has some real-world counterparts. Suppose, for example, that a colleague wears the same tie to four formal dinners. Based on this evidence you might be able to estimate the total number of ties that he owns, and might guess that he is less likely to wear a new tie to the next dinner than a colleague who wore different ties to the first four dinners.\nMethod. 12 adults participated for course credit. Participants interacted with a computer interface that displayed an urn, a robotic arm and a beam of UV light. The arm randomly sampled balls from the urn, and participants were told that each ball had a unique serial number that was visible only under UV light. After some balls were sampled, the robotic arm moved them under the UV light and revealed their serial numbers before returning them to the urn. Other balls were returned directly to the urn without having their serial numbers revealed. The serial numbers were alphanumeric strings such as \u201cQXR182\u201d; note that these serial numbers provide no information about the total number of objects, and that our setting is therefore different from the Jeffreys tramcar problem [15].\nThe experiment included five within-participant conditions shown in Figure 1. The observations for each condition can be summarized by a string that indicates the number of tokens and the serial numbers of some but perhaps not all tokens. 
The condition in Figure 1a is a case where the same ball (without loss of generality, we call it ball 1) is drawn from the urn on five consecutive occasions. The condition in Figure 1b is a case where five different balls are drawn from the urn. The condition in Figure 1d is a case where five draws are made, but only the serial number of the first ball is revealed. Within any of the five conditions, all of the balls had the same color (white or gray), but different colors were used across different conditions. For simplicity, all draws in Figure 1 are shown as white balls.\nOn the second and all subsequent draws, participants were asked two questions about any token that was subsequently identified. They first indicated whether the token was likely to be the same as the ball they observed on the first draw (the ball labeled 1 in Figure 1). They then indicated whether the token was likely to be a ball that they had never seen before. Both responses were provided on a scale from 1 (very unlikely) to 7 (very likely). At the end of each condition, participants were asked to estimate the total number of balls in the urn. Twelve options were provided ranging from \u201cexactly 1\u201d to \u201cexactly 12,\u201d and a thirteenth option was labeled \u201cmore than 12.\u201d Responses to each option were again provided on a seven point scale.\nModel predictions and results. The comparisons of primary interest involve the identification questions in conditions 1a and 1b. 
In condition 1a the open world model infers that the total number of balls is probably low, and becomes increasingly confident that each new token is the same as the first object observed. In condition 1b the model infers that the number of balls is probably high, and becomes increasingly confident that each new token is probably a new ball.\n\nFigure 1: Model predictions and results for the five conditions in experiment 1. The left columns in (a) and (b) show inferences about the identification questions. In each plot, the first group of bars shows predictions about the probability that each new token is the same ball as the first ball drawn from the urn. The second group of bars shows the probability that each new token is a ball that has never been seen before. The right columns in (a) and (b) and the plots in (c) through (e) show inferences about the total number of balls in each urn. All human responses are shown on the 1-7 scale used for the experiment. Model predictions are shown as probabilities (identification questions) or ranks (population size questions).\n\nThe rightmost charts in Figures 1a and 1b show inferences about the total number of balls and confirm that humans expect the number of balls to be low in condition 1a and high in condition 1b. Note that participants in condition 1b have solved the problem of unobserved-object discovery and inferred the existence of objects that they have never seen. The leftmost charts in 1a and 1b show responses to the identification questions, and the final bar in each group of four shows predictions about the fifth token sampled. As predicted by the model, participants in 1a become increasingly confident that each new token is the same object as the first token, but participants in 1b become increasingly confident that each new token is a new object. The increase in responses to the new ball questions in Figure 1b is replicated in conditions 2d and 2e of Experiment 2, and therefore appears to be reliable.\nThe third and fourth rows of Figures 1a and 1b show the predictions of two alternative models that are intuitively appealing but that fail to account for our results. The first is the Dirichlet Process (DP) mixture model, which was proposed by Anderson [16] as an account of human categorization. 
Unlike most psychological models of categorization, the DP mixture model reserves some probability mass for outcomes that have not yet been observed. The model incorporates a prior distribution over partitions; in most applications of the model these partitions organize objects into categories, but Anderson suggests that the model can also be used to organize object tokens into classes that correspond to individual objects. The DP mixture model successfully predicts that the ball 1 questions will receive higher ratings in 1a than 1b, but predicts that responses to the new ball question will be identical across these two conditions. According to this model, the probability that a new token corresponds to a new object is θ/(m+θ), where θ is a hyperparameter and m is the number of tokens observed thus far. Note that this probability is the same regardless of the identities of the m tokens previously observed.\nThe Pitman-Yor (PY) mixture model in the fourth row is a generalization of the DP mixture model that uses a prior over partitions defined by two hyperparameters [17]. According to this model, the probability that a new token corresponds to a new object is (θ+kα)/(m+θ), where θ and α are hyperparameters and k is the number of distinct objects observed so far. The flexibility offered by a second hyperparameter allows the model to predict a difference in responses to the new ball questions across the two conditions, but the model does not account for the increasing pattern observed in condition 1b. Most settings of θ and α predict that the responses to the new ball questions will decrease in condition 1b. 
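The two predictive rules are easy to state in code. The following sketch is a hypothetical illustration (the function names are ours); it shows that the DP rule ignores the number of distinct objects k, and that the PY rule with θ = 0 yields a flat prediction when every token is a new ball:

```python
def dp_new_prob(m, theta):
    # DP mixture: P(new object after m tokens) = theta / (m + theta),
    # regardless of how many distinct objects those m tokens contained.
    return theta / (m + theta)

def py_new_prob(m, k, theta, alpha):
    # Pitman-Yor mixture: P(new object) = (theta + k * alpha) / (m + theta),
    # where k is the number of distinct objects observed so far.
    return (theta + k * alpha) / (m + theta)

# In condition 1a one ball recurs (k = 1); in 1b every ball is new (k = m).
# The DP prediction is identical in the two conditions by construction:
dp_curve = [dp_new_prob(m, 1.0) for m in range(1, 5)]

# A generic PY setting predicts a decreasing "new ball" curve in 1b...
py_1b = [py_new_prob(m, m, 1.0, 0.5) for m in range(1, 5)]
# ...while theta = 0 gives a flat prediction equal to alpha:
py_flat = [py_new_prob(m, m, 0.0, 0.5) for m in range(1, 5)]
```

Neither curve increases with m, which is the pattern the human responses in condition 1b show.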
A non-generic setting of these hyperparameters with θ = 0 can generate the flat predictions in Figure 1, but no setting of the hyperparameters predicts the increase in the human responses. Although the PY and DP models both make predictions about the identification questions, neither model can predict the total number of balls in the urn. Both models assume that the population of balls is countably infinite, which does not seem appropriate for the tasks we consider.\nFigures 1c through 1e show results for three control conditions. Like condition 1a, 1c and 1d are cases where exactly one serial number is observed. Like conditions 1a and 1b, 1d and 1e are cases where exactly five tokens are observed. None of these control conditions produces results similar to conditions 1a and 1b, suggesting that methods which simply count the number of tokens or serial numbers will not account for our results.\nIn each of the final three conditions our model predicts that the posterior distribution on the number of balls n should decay as n increases. This prediction is not consistent with our data, since most participants assigned equal ratings to all 13 options, including \u201cexactly 12 balls\u201d and \u201cmore than 12 balls.\u201d The flat responses in Figures 1c through 1e appear to indicate a generic desire to express uncertainty, and suggest that our ideal learner model accounts for human responses only after several informative observations have been made.\n\n3 Experiment 2: Object discovery and identity uncertainty\nOur second experiment focuses on object discovery rather than identification. We consider cases where learners make inferences about the number of objects they have seen and the total number of objects in the urn even though there is substantial uncertainty about the identities of many of the tokens observed. 
Our probabilistic model predicts that observations of unidentified tokens can influence inferences about the total number of objects, and our second experiment tests this prediction.\nMethod. 12 adults participated for course credit. The same participants took part in Experiments 1 and 2, and Experiment 2 was always completed after Experiment 1. Participants interacted with the same computer interface in both experiments, and the seven conditions in Experiment 2 are shown in Figure 2. Note that each condition now includes one or more gray tokens. In 2a, for example, there are four gray tokens and none of these tokens is identified. All tokens were sampled with replacement, and the condition labels in Figure 2 summarize the complete set of tokens presented in each condition. Within each condition the tokens were presented in a pseudo-random order; in 2a, for example, the gray and white tokens were interspersed with each other.\nModel predictions and results. The cases of most interest are the inferences about the total number of balls in conditions 2a and 2c. In both conditions participants observe exactly four white tokens and all four tokens are revealed to be the same ball. The gray tokens in each condition are never identified, but the number of these tokens varies across the conditions. 
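This prediction can be checked with a small enumeration. The sketch below is our own illustrative Python, not the authors' code: it computes the posterior on the total number of balls for condition-2a-style data, where the identified white tokens all belong to one ball and the gray tokens are marginalized over labels (given n and n_w, an unidentified token is gray with probability (n - n_w)/n).

```python
N_MAX = 20  # truncated hypothesis space; the paper uses n <= 1000

def posterior_n_unidentified(num_white_tokens, num_gray_tokens):
    # P(n | data): num_white_tokens identified tokens of a single white ball,
    # plus num_gray_tokens gray tokens whose labels are never revealed.
    post = {}
    for n in range(1, N_MAX + 1):
        total = 0.0
        for n_w in range(0, n + 1):       # Equation 2: P(n_w | n) = 1/(n+1)
            p_white_ball = n_w / n        # the one identified ball is white
            p_gray_token = (n - n_w) / n  # marginal color of an unlabeled token
            total += p_white_ball * p_gray_token ** num_gray_tokens / (n + 1)
        # Prior 1/n (Equation 1) and probability (1/n)^m for m identified labels
        post[n] = (1.0 / n) * (1.0 / n) ** num_white_tokens * total
    z = sum(post.values())
    return {n: p / z for n, p in post.items()}

cond_2a = posterior_n_unidentified(4, 4)   # equal numbers of white and gray tokens
cond_2c = posterior_n_unidentified(4, 12)  # gray tokens three times more common
```

Although the gray tokens are never identified, adding more of them shifts the posterior on n upward, which is the comparison between conditions 2a and 2c.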
Even though the identities of the gray tokens are never revealed, the open world model can use these observations to guide its inference about the total number of balls. In 2a, the proportions of white tokens and gray tokens are equal and there appears to be only one white ball, suggesting that the total number of balls is around two. In 2c gray tokens are now three times more common, suggesting that the total number of balls is larger than two. As predicted, the human responses in Figure 2 show that the peak of the distribution in 2a shifts to the right in 2c. Note, however, that the model does not accurately predict the precise location of the peak in 2c.\n\nFigure 2: Model predictions and results for the seven conditions in Experiment 2. The left columns in (a) through (e) show inferences about the identification questions, and the remaining plots show inferences about the total number of balls in each urn.\n\nSome of the remaining conditions in Figure 2 serve as controls for the comparison between 2a and 2c. Conditions 2a and 2c differ in the total number of tokens observed, but condition 2b shows that this difference is not the critical factor. The number of tokens observed is the same across 2b and 2c, yet the inference in 2b is more similar to the inference in 2a than in 2c. 
Conditions 2a and 2c also differ in the proportion of white tokens observed, but conditions 2f and 2g show that this difference is not sufficient to explain our results. The proportion of white tokens observed is the same across conditions 2a, 2f, and 2g, yet only 2a provides strong evidence that the total number of balls is low. The human inferences for 2f and 2g show the hint of an alternating pattern consistent with the inference that the total number of balls in the urn is even. Only 2 out of 12 participants generated this pattern, however, and the majority of responses are near uniform. Finally, conditions 2d and 2e replicate our finding from Experiment 1 that the identity labels play an important role. The only difference between 2a and 2e is that the four labels are distinct in the latter case, and this single difference produces a predictable divergence in human inferences about the total number of balls.\n\n4 Experiment 3: Categorization and identity uncertainty\nExperiment 2 suggested that people make robust inferences about the existence and number of unobserved objects in the presence of identity uncertainty. Our final experiment explores categorization in the presence of identity uncertainty. We consider an extreme case where participants make inferences about the variability of a category even though the tokens of that category have never been identified.\nMethod. The experiment included two between-subject conditions, and 20 adults were recruited for each condition. Participants were asked to reason about a category including eggs of a given species, where eggs in the same category might vary in size. The interface used in Experiments 1 and 2 was adapted so that the urn now contained two kinds of objects: notepads and eggs. Participants were told that each notepad had a unique color and a unique label written on the front. 
The UV light played no role in the experiment and was removed from the interface: notepads could be identified by visual inspection, and identifying labels for the eggs were never shown.\nIn both conditions participants observed a sequence of 16 tokens sampled from the urn. Half of the tokens were notepads and the others were eggs, and all egg tokens were identical in size. Whenever an egg was sampled, participants were told that this egg was a Kwiba egg. At the end of the condition, participants were shown a set of 11 eggs that varied in size and asked to rate the probability that each one was a Kwiba egg. Participants then made inferences about the total number of eggs and the total number of notepads in the urn.\nThe two conditions were intended to lead to different inferences about the total number of eggs in the urn. In the 4 egg condition, all items (notepads and eggs) were sampled with replacement. The 8 notepad tokens included two tokens of each of 4 notepads, suggesting that the total number of notepads was 4. Since the proportion of egg tokens and notepad tokens was equal, we expected participants to infer that the total number of eggs was roughly four. In the 1 egg condition, four notepads were observed in total, but the first three were sampled without replacement and never returned to the urn. The final notepad and the egg tokens were always sampled with replacement. After the first three notepads had been removed from the urn, the remaining notepad was sampled about half of the time. We therefore expected participants to infer that the urn probably contained a single notepad and a single egg by the end of the experiment, and that all of the eggs they had observed were tokens of a single object.\nModel. We can simultaneously address identification and categorization by combining the open world model with a Gaussian model of categorization. Suppose that the members of a given category (e.g. 
Kwiba eggs) vary along a single continuous dimension (e.g. size). We assume that the egg\nsizes are distributed according to a Gaussian with known mean and unknown variance \u03c32. For\nconvenience, we assume that the mean is zero (i.e. we measure size with respect to the average) and\nuse the standard inverse-gamma prior on the variance: p(\u03c32) \u221d (\u03c32)\u2212(\u03b1+1)e\u2212\n\u03c32 . Since we are\ninterested only in qualitative predictions of the model, the precise values of the hyperparameters are\nnot very important. To generate the results shown in Figure 3 we set \u03b1 = 0.5 and \u03b2 = 2.\nBefore observing any eggs, the marginal distribution on sizes is p(x) = R p(x|\u03c32)p(\u03c32)d\u03c32. Sup-\npose now that we observe m random samples from the category and that each one has size zero.\nIf m is large then these observations provide strong evidence that the variance \u03c32 is small, and the\nposterior distribution p(x|m) will be tightly peaked around zero. If m, is small, however, then the\nposterior distribution will be broader.\n\n\u03b2\n\n7\n\n\fa)\n\nCategory pdf (4 eggs)\n\n)\nx\n(\n4\np\n\n2\n\n1\n\n0\n\nb)\n\nCategory pdf (1 egg)\n\n)\nx\n(\n1\np\n\u2212\n\n)\nx\n(\n4\np\n\n\u2212\n\n)\nx\n(\n1\np\n\n2\n\n1\n\n0\n\n\u22122\n\n0\n\n2\n\nx (size)\n\n\u22122\n\n0\n\n2\n\nx (size)\n\nNumber of eggs (4 eggs)\n7\n5\n3\n1\n\n2 4 6 8\n\n0\n1\n\n2\n1\n\nNumber of eggs (1 egg)\n7\n5\n3\n1\n\n2 4 6 8\n\n0\n1\n\n2\n1\n\nModel differences\n\n\u22122\n\n0\n\n2\n\nx (size)\n\nHuman differences\n\n=\n0.1\n\n0\n\n\u22120.1\n\nc)\n0.4\n0.2\n0\n\u22120.2\n\u22120.4\n\n\u22124 \u22122\n\n0\n\n2\n\n4\n\n(size)\n\nFigure 3: (a) Model predictions for Experiment 3. The \ufb01rst two panels show the size distributions\ninferred for the two conditions, and the \ufb01nal panel shows the difference of these distributions. 
The difference curve for the model rises to a peak of around 1.6 but has been truncated at 0.1. (b) Human inferences about the total number of eggs in the urn. As predicted, participants in the 4 egg condition believe that the urn contains more eggs. (c) The difference of the size distributions generated by participants in each condition. The central peak is absent but otherwise the curve is qualitatively similar to the model prediction.

The categorization model described so far is entirely standard, but note that our experiment considers a case where T, the observed stream of object tokens, is not sufficient to determine m, the number of distinct objects observed. We therefore use the open world model to generate a posterior distribution over m, and compute a marginal distribution over size by integrating out both m and σ²: p(x|T) = ∫∫ p(x|σ²) p(σ²|m) p(m|T) dσ² dm. Figure 3a shows predictions of this "open world + Gaussian" model for the two conditions in our experiment. Note that the difference between the curves for the two conditions has the characteristic Mexican-hat shape produced by a difference of Gaussians.

Results. Inferences about the total number of eggs suggested that our manipulation succeeded. Figure 3b indicates that participants in the 4 egg condition believed that they had seen more eggs than participants in the 1 egg condition. Participants in both conditions generated a size distribution for the category of Kwiba eggs, and the difference of these distributions is shown in Figure 3c. Although the magnitude of the differences is small, the shape of the difference curve is consistent with the model predictions.
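The "open world + Gaussian" computation can be sketched numerically. The sketch below is a minimal illustration and not the code used for the paper's figures: it assumes, as in the text, that every observed egg has size zero and that α = 0.5 and β = 2, and it substitutes hypothetical posteriors p(m|T) for the two conditions (concentrated near m = 4 and m = 1 respectively) in place of the open world model's actual output.

```python
import numpy as np

ALPHA, BETA = 0.5, 2.0   # hyperparameters from the text

# Grids: sizes x, and variance values for numerical integration.
x = np.linspace(-4.0, 4.0, 201)
s2 = np.linspace(1e-3, 50.0, 5000)
ds2 = s2[1] - s2[0]

def predictive(x, m):
    """p(x | m eggs observed, all of size zero).

    With every observed size equal to zero, the inverse-gamma posterior
    on sigma^2 keeps scale BETA and has shape ALPHA + m/2."""
    a = ALPHA + m / 2.0
    post = s2 ** (-(a + 1.0)) * np.exp(-BETA / s2)   # unnormalized posterior
    post /= post.sum() * ds2                          # normalize on the grid
    lik = np.exp(-x[:, None] ** 2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
    return (lik * post).sum(axis=1) * ds2             # integrate out sigma^2

def open_world_predictive(p_m):
    """p(x | T): average the predictives over a posterior p(m | T)."""
    return sum(w * predictive(x, m) for m, w in p_m.items())

# Hypothetical p(m | T) for the two conditions (illustrative values only).
p4 = open_world_predictive({3: 0.25, 4: 0.50, 5: 0.25})  # 4 egg condition
p1 = open_world_predictive({1: 0.80, 2: 0.20})           # 1 egg condition

# More distinct eggs of size zero give stronger evidence that sigma^2 is
# small, hence a tighter predictive.  The difference p4 - p1 is therefore
# positive at x = 0 and negative on the flanks: the Mexican-hat shape.
diff = p4 - p1
```

Plotting diff against x reproduces the qualitative shape of the model curve in Figure 3a; the exact peak height depends on the hypothetical p(m|T) values chosen here.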
The x = 0 bar is the only case that diverges from the expected Mexican-hat shape, and this result is probably due to a ceiling effect: 80% of participants in both conditions chose the maximum possible rating for the egg with mean size (size zero), leaving little opportunity for a difference between conditions to emerge. To support the qualitative result in Figure 3c we computed the variance of the curve generated by each individual participant and tested the hypothesis that the variances were greater in the 1 egg condition than in the 4 egg condition. A Mann-Whitney test indicated that this difference was marginally significant (p < 0.1, one-sided).

5 Conclusion

Parsing the world into stable and recurring objects is arguably our most basic cognitive achievement [2, 10]. This paper described a simple model of object discovery and identification and evaluated it in three behavioral experiments. Our first experiment confirmed that people rely on prior knowledge when solving identification problems. Our second and third experiments explored problems where the identities of many object tokens were never revealed. Despite the resulting uncertainty, we found that participants in these experiments were able to track the number of objects they had seen, to infer the existence of unobserved objects, and to learn and reason about categories.

Although the tasks in our experiments were all relatively simple, future work can apply our approach to more realistic settings. For example, a straightforward extension of our model can handle problems where objects vary along multiple perceptual dimensions and where observations are corrupted by perceptual noise.
Discovery and identification problems may take several different forms, but probabilistic inference can help to explain how all of these problems are solved.

Acknowledgments We thank Bobby Han, Faye Han and Maureen Satyshur for running the experiments.

References

[1] E. A. Tibbetts and J. Dale. Individual recognition: it is good to be different. Trends in Ecology and Evolution, 22(10):529–537, 2007.

[2] W. James. Principles of psychology. Holt, New York, 1890.

[3] R. M. Nosofsky. Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115:39–57, 1986.

[4] F. Xu and S. Carey. Infants' metaphysics: the case of numerical identity. Cognitive Psychology, 30:111–153, 1996.

[5] L. W. Barsalou, J. Huttenlocher, and K. Lamberts. Basing categorization on individuals and events. Cognitive Psychology, 36:203–272, 1998.

[6] L. J. Rips, S. Blok, and G. Newman. Tracing the identity of objects. Psychological Review, 113(1):1–30, 2006.

[7] A. McCallum and B. Wellner. Conditional models of identity uncertainty with application to noun coreference. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17, pages 905–912. MIT Press, Cambridge, MA, 2005.

[8] B. Milch, B. Marthi, S. Russell, D. Sontag, D. L. Ong, and A. Kolobov. BLOG: Probabilistic models with unknown objects. In Proceedings of the 19th International Joint Conference on Artificial Intelligence, pages 1352–1359, 2005.

[9] J. Bunge and M. Fitzpatrick. Estimating the number of species: a review. Journal of the American Statistical Association, 88(421):364–373, 1993.

[10] R. G. Millikan. On clear and confused ideas: an essay about substance concepts. Cambridge University Press, New York, 2000.

[11] R. N. Shepard.
Stimulus and response generalization: a stochastic model relating generalization to distance in psychological space. Psychometrika, 22:325–345, 1957.

[12] A. M. Leslie, F. Xu, P. D. Tremoulet, and B. J. Scholl. Indexing and the object concept: developing 'what' and 'where' systems. Trends in Cognitive Science, 2(1):10–18, 1998.

[13] J. D. Nichols. Capture-recapture models. Bioscience, 42(2):94–102, 1992.

[14] G. Csibra and A. Volein. Infants can infer the presence of hidden objects from referential gaze information. British Journal of Developmental Psychology, 26:1–11, 2008.

[15] H. Jeffreys. Theory of Probability. Oxford University Press, Oxford, 1961.

[16] J. R. Anderson. The adaptive nature of human categorization. Psychological Review, 98(3):409–429, 1991.

[17] J. Pitman. Combinatorial stochastic processes, 2002. Notes for Saint Flour Summer School.