{"title": "Particle Filtering for Nonparametric Bayesian Matrix Factorization", "book": "Advances in Neural Information Processing Systems", "page_first": 1513, "page_last": 1520, "abstract": null, "full_text": "Particle Filtering for Nonparametric Bayesian Matrix Factorization

Frank Wood, Department of Computer Science, Brown University, Providence, RI 02912, fwood@cs.brown.edu

Thomas L. Griffiths, Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, tom griffiths@berkeley.edu

Abstract
Many unsupervised learning problems can be expressed as a form of matrix factorization, reconstructing an observed data matrix as the product of two matrices of latent variables. A standard challenge in solving these problems is determining the dimensionality of the latent matrices. Nonparametric Bayesian matrix factorization is one way of dealing with this challenge, yielding a posterior distribution over possible factorizations of unbounded dimensionality. A drawback to this approach is that posterior estimation is typically done using Gibbs sampling, which can be slow for large problems and when conjugate priors cannot be used. As an alternative, we present a particle filter for posterior estimation in nonparametric Bayesian matrix factorization models. We illustrate this approach with two matrix factorization models and show favorable performance relative to Gibbs sampling.

1 Introduction
One of the goals of unsupervised learning is to discover the latent structure expressed in observed data. The nature of the learning problem will vary depending on the form of the data and the kind of latent structure it expresses, but many unsupervised learning problems can be viewed as a form of matrix factorization, i.e. decomposing an observed data matrix, X, into the product of two or more matrices of latent variables.
If X is an N × D matrix, where N is the number of D-dimensional observations, the goal is to find a low-dimensional latent feature space capturing the variation in the observations making up X. This can be done by assuming that X ≈ ZY, where Z is an N × K matrix indicating which of (and perhaps the extent to which) K latent features are expressed in each of the N observations, and Y is a K × D matrix indicating how those K latent features are manifest in the D-dimensional observation space. Typically, K is less than D, meaning that Z and Y provide an efficient summary of the structure of X. A standard problem for unsupervised learning algorithms based on matrix factorization is determining the dimensionality of the latent matrices, K. Nonparametric Bayesian statistics offers a way to address this problem: instead of specifying K a priori and searching for a "best" factorization, nonparametric Bayesian matrix factorization approaches such as those in [1] and [2] estimate a posterior distribution over factorizations with unbounded dimensionality (i.e. letting K → ∞). This remains computationally tractable because each model uses a prior that ensures that Z is sparse, based on the Indian buffet process (IBP) [1]. The search for the dimensionality of the latent feature matrices thus becomes a problem of posterior inference over the number of non-empty columns in Z. Previous work on nonparametric Bayesian matrix factorization has used Gibbs sampling for posterior estimation [1, 2]. Indeed, Gibbs sampling is the standard inference algorithm used in nonparametric Bayesian methods, most of which are based on the Dirichlet process [3, 4]. However, recent work has suggested that sequential Monte Carlo methods such as particle filtering can provide an efficient alternative to Gibbs sampling in Dirichlet process mixture models [5, 6].
In this paper we develop a novel particle filtering algorithm for posterior estimation in matrix factorization models that use the IBP, and illustrate its applicability to two specific models: one with a conjugate prior, and the other without a conjugate prior but tractable in other ways. Our particle filtering algorithm is by nature an "on-line" procedure, in which each row of X is processed only once, in sequence. This stands in contrast to Gibbs sampling, which must revisit each row many times to converge to a reasonable representation of the posterior distribution. We present simulation results showing that our particle filtering algorithm can be significantly more efficient than Gibbs sampling for each of the two models, and discuss its applicability to the broad class of nonparametric matrix factorization models based on the IBP.

2 Nonparametric Bayesian Matrix Factorization
Let X be an observed N × D matrix. Our goal is to find a representation of the structure expressed in this matrix in terms of the latent matrices Z (N × K) and Y (K × D). This can be formulated as a statistical problem if we view X as being produced by a probabilistic generative process, resulting in a probability distribution P(X | Z, Y). The critical assumption necessary to make this a matrix factorization problem is that the distribution of X is conditionally dependent on Z and Y only through the product ZY. Although defining P(X | Z, Y) allows us to use methods such as maximum-likelihood estimation to find a point estimate, our goal is instead to compute a posterior distribution over possible values of Z and Y. To do so we need to specify a prior over the latent matrices, P(Z, Y); we can then use Bayes' rule to find the posterior distribution over Z and Y,

P(Z, Y | X) \propto P(X | Z, Y) P(Z, Y).   (1)

This constitutes Bayesian matrix factorization, but two problems remain: the choice of K, and the computational cost of estimating the posterior distribution.
Unlike standard matrix factorization methods that require an a priori choice of K, nonparametric Bayesian approaches allow us to estimate a posterior distribution over Z and Y where the size of these matrices is unbounded. The models we discuss in this paper place a prior on Z that gives each "left-ordered" binary matrix (see [1] for details) probability

P(Z) = \frac{\alpha^{K_+}}{\prod_{h=1}^{2^N - 1} K_h!} \exp\{-\alpha H_N\} \prod_{k=1}^{K_+} \frac{(N - m_k)!\,(m_k - 1)!}{N!}   (2)

where K_+ is the number of columns of Z with non-zero entries, m_k is the number of 1's in column k, N is the number of rows, H_N = \sum_{i=1}^{N} 1/i is the Nth harmonic number, and K_h is the number of columns in Z that when read top-to-bottom form a sequence of 1's and 0's corresponding to the binary representation of the number h. This prior on Z is a distribution on sparse binary matrices that favors those that have few columns with many ones, with the rest of the columns being all zeros. This distribution can be derived as the outcome of a sequential generative process called the Indian buffet process (IBP) [1]. Imagine an Indian restaurant into which N customers arrive one by one and serve themselves from the buffet. The first customer loads her plate from the first Poisson(α) dishes. The ith customer chooses dishes proportional to their popularity, choosing dish k with probability m_k/i, where m_k is the number of people who have chosen the kth dish previously, and then chooses Poisson(α/i) new dishes. If we record the choices of each customer on one row of a matrix whose columns correspond to the dishes on the buffet (1 if chosen, 0 if not), then (the left-ordered form of) that matrix constitutes a draw from the distribution in Eqn. 2. The order in which the customers enter the restaurant has no bearing on the distribution of Z (up to permutation of the columns), making this distribution exchangeable. In this work we assume that Z and Y are independent, with P(Z, Y) = P(Z)P(Y). As shown in Fig.
1, since we use the IBP prior for P(Z), Y is a matrix with an infinite number of rows and D columns. We can take any appropriate distribution for P(Y), and the infinite number of rows will not pose a problem because only K_+ rows will interact with non-zero elements of Z. A posterior distribution over Z and Y implicitly defines a distribution over the effective dimensionality of these matrices, through K_+.

Figure 1: Nonparametric Bayesian matrix factorization. The data matrix X (N × D) is the product of Z and Y, which have an unbounded number of columns and rows respectively.

This approach to nonparametric Bayesian matrix factorization has been used for both continuous [1, 7] and binary [2] data matrices X. Since the posterior distribution defined in Eqn. 1 is generally intractable, Gibbs sampling has previously been employed to construct a sample-based representation of this distribution. However, generally speaking, Gibbs sampling is slow, requiring each entry in Z and Y to be repeatedly updated conditioned on all of the others. This problem is compounded in contexts where the number of rows of X increases as a consequence of new observations being introduced, since the Gibbs sampler would need to be restarted after the introduction of each new observation.

3 Particle Filter Posterior Estimation
Our approach addresses the problems faced by the Gibbs sampler by exploiting the fact that the prior on Z is recursively decomposable. To explain this we need to introduce new notation: let X^{(i)} be the ith row of X, and let X^{(1:i)} and Z^{(1:i)} be all the rows of X and Z up to row i, respectively. Because the IBP prior is recursively decomposable, it is easy to sample from P(Z^{(1:i)} | Z^{(1:i-1)}); to do so, simply follow the IBP in choosing dishes for the ith customer given the record of which dishes were chosen by the first i - 1 customers (see Algorithm 1).
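As a concrete illustration of this recursive sampling process, the IBP draw described above can be sketched in a few lines of NumPy. This is our own minimal sketch, not the authors' code; the function name and interface are illustrative:

```python
import numpy as np

def sample_ibp(N, alpha, rng=None):
    """Draw a binary feature matrix Z row by row from the Indian buffet
    process: customer i takes existing dish k with probability m_k / i,
    then samples Poisson(alpha / i) new dishes."""
    rng = np.random.default_rng() if rng is None else rng
    Z = np.zeros((0, 0), dtype=int)
    for i in range(1, N + 1):
        K = Z.shape[1]
        m = Z.sum(axis=0)                          # dish popularities m_k
        row = (rng.random(K) < m / i).astype(int)  # existing dishes
        k_new = rng.poisson(alpha / i)             # new dishes for customer i
        Z = np.hstack([Z, np.zeros((i - 1, k_new), dtype=int)])
        Z = np.vstack([Z, np.concatenate([row, np.ones(k_new, dtype=int)])])
    return Z
```

A full call returns one draw of Z; a single iteration of the loop is exactly the conditional step P(Z^{(1:i)} | Z^{(1:i-1)}) used by the particle filter (Algorithm 1).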
Applying Bayes' rule, we can write the posterior on Z^{(1:i)} and Y given X^{(1:i)} in the following form:

P(Z^{(1:i)}, Y | X^{(1:i)}) \propto P(X^{(i)} | Z^{(1:i)}, Y, X^{(1:i-1)}) P(Z^{(1:i)}, Y | X^{(1:i-1)}).   (3)

Here we do not index Y, as it is always an infinite matrix.¹ If we could evaluate P(Z^{(1:i-1)}, Y | X^{(1:i-1)}), we could obtain weighted samples (or "particles") from P(Z^{(1:i)}, Y | X^{(1:i)}) using importance sampling with a proposal distribution of

P(Z^{(1:i)}, Y | X^{(1:i-1)}) = \sum_{Z^{(1:i-1)}} P(Z^{(1:i)} | Z^{(1:i-1)}) P(Z^{(1:i-1)}, Y | X^{(1:i-1)})   (4)

and taking

w_{(\ell)} \propto P(X^{(i)} | Z^{(1:i)}_{(\ell)}, Y_{(\ell)}, X^{(1:i-1)})   (5)

as the weight associated with the \ell th particle. However, we could also use a similar scheme to approximate P(Z^{(1:i-1)}, Y | X^{(1:i-1)}) if we could evaluate P(Z^{(1:i-2)}, Y | X^{(1:i-2)}). Following Eqn. 4, we could then approximately generate a set of weighted particles from P(Z^{(1:i)}, Y | X^{(1:i-1)}) by using the IBP to sample a value from P(Z^{(1:i)} | Z^{(1:i-1)}_{(\ell)}) for each particle from P(Z^{(1:i-1)}, Y | X^{(1:i-1)}) and carrying forward the weights associated with those particles. This "particle filtering" procedure defines a recursive importance sampling scheme for the full posterior P(Z, Y | X), and is known as sequential importance sampling [8]. When applied in its basic form this procedure can produce particles with extreme weights, so we resample the particles at each iteration of the recursion from the distribution given by their normalized weights and set w_{(\ell)} = 1/L for all \ell, a standard method known as sequential importance resampling [8].

The procedure defined in the previous paragraphs is a general-purpose particle filter for matrix factorization models based on the IBP. This procedure will work even when the prior defined on

¹In practice, we need only keep track of the rows of Y that correspond to the non-empty columns of Z, as the posterior distribution for the remaining entries is just the prior.
Thus, if new non-empty columns are added in moving from Z^{(i-1)} to Z^{(i)}, we need to expand the number of rows of Y that we represent accordingly.

Algorithm 1 Sample P(Z^{(1:i)} | Z^{(1:i-1)}, α) using the Indian buffet process
1: Z ← Z^{(1:i-1)}
2: if i = 1 then
3:   sample K_1^{new} ~ Poisson(α)
4:   Z_{1, 1:K_1^{new}} ← 1
5: else
6:   K_+ ← number of non-zero columns in Z
7:   for k = 1, ..., K_+ do
8:     sample z_{i,k} ~ Bernoulli(m_{-i,k}/i)
9:   end for
10:  sample K_i^{new} ~ Poisson(α/i)
11:  Z_{i, K_+ + 1 : K_+ + K_i^{new}} ← 1
12: end if
13: Z^{(1:i)} ← Z

Y is not conjugate to the likelihood (and is much simpler than other algorithms for using the IBP with non-conjugate priors, e.g. [9]). However, the procedure can be simplified further in special cases. The following example applications illustrate the particle filtering approach for two different models. In the first case, the prior over Y is conjugate to the likelihood, which means that Y need not be represented. In the other case, although the prior is not conjugate and thus Y does need to be explicitly represented, we present a way to improve the efficiency of this general particle filtering approach by taking advantage of certain analytic conditionals. The particle filtering approach results in significant improvements in performance over Gibbs sampling in both models.

4 A Conjugate Model: Infinite Linear-Gaussian Matrix Factorization
In this model, explained in detail in [1], the entries of both X and Y are continuous. We report results on the modeling of image data of the same kind as was originally used to demonstrate the model in [1]. Here each row of X is an image, each row of Z indicates the "latent features" present in that image, such as the objects it contains, and each row of Y indicates the pixel values associated with a latent feature.
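The generic propagate/weight/resample recursion of Section 3 can be sketched as a short loop. This is a sketch under our own naming conventions, not the authors' implementation: `propose` stands in for the model-specific extension of a particle (e.g. Algorithm 1), and `log_weight` for the predictive likelihood of the new row:

```python
import numpy as np

def particle_filter(X, propose, log_weight, L=100, rng=None):
    """Generic sequential importance resampling loop: for each row of X,
    extend every particle (propose), weight it by the predictive
    likelihood of the new row (log_weight), then resample."""
    rng = np.random.default_rng() if rng is None else rng
    particles = [None] * L                     # model-specific state
    for i in range(X.shape[0]):
        particles = [propose(p, i) for p in particles]
        logw = np.array([log_weight(p, X, i) for p in particles])
        w = np.exp(logw - logw.max())          # stabilized, unnormalized
        w /= w.sum()                           # normalized weights
        idx = rng.choice(L, size=L, p=w)       # multinomial resampling
        particles = [particles[j] for j in idx]
    return particles
```

Working in log weights and subtracting the maximum before exponentiating avoids the underflow that raw likelihood products would cause for long observation sequences.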
The likelihood for this image model is matrix Gaussian,

P(X | Z, Y, \sigma_X) = \frac{1}{(2\pi\sigma_X^2)^{ND/2}} \exp\{-\frac{1}{2\sigma_X^2} \mathrm{tr}((X - ZY)^T (X - ZY))\}

where \sigma_X^2 is the noise variance. The prior on the parameters of the latent features is also Gaussian,

P(Y | \sigma_Y) = \frac{1}{(2\pi\sigma_Y^2)^{KD/2}} \exp\{-\frac{1}{2\sigma_Y^2} \mathrm{tr}(Y^T Y)\}

with each element having variance \sigma_Y^2. Because both the likelihood and the prior are matrix Gaussian, they form a conjugate pair and Y can be integrated out to yield the collapsed likelihood,

P(X | Z, \sigma_X, \sigma_Y) = \frac{1}{(2\pi)^{ND/2} \sigma_X^{(N-K_+)D} \sigma_Y^{K_+ D} |Z_+^T Z_+ + \frac{\sigma_X^2}{\sigma_Y^2} I_{K_+}|^{D/2}} \exp\{-\frac{1}{2\sigma_X^2} \mathrm{tr}(X^T \Sigma^{-1} X)\}   (6)

which is matrix Gaussian with \Sigma^{-1} = I - Z_+ (Z_+^T Z_+ + \frac{\sigma_X^2}{\sigma_Y^2} I_{K_+})^{-1} Z_+^T. Here Z_+ = Z_{1:i, 1:K_+} is the first K_+ columns of Z and K_+ is the number of non-zero columns of Z.

4.1 Particle Filter
The use of a conjugate prior means that we do not need to represent Y explicitly in our particle filter. In this case the particle filter recursion shown in Eqns. 3 and 4 reduces to

P(Z^{(1:i)} | X^{(1:i)}) \propto P(X^{(i)} | Z^{(1:i)}, X^{(1:i-1)}) \sum_{Z^{(1:i-1)}} P(Z^{(1:i)} | Z^{(1:i-1)}) P(Z^{(1:i-1)} | X^{(1:i-1)})

and may be implemented as shown in Algorithm 2.

Algorithm 2 Particle filter for the infinite linear-Gaussian model
1: initialize L particles Z^{(0)}_{(\ell)}, \ell = 1, ..., L
2: for i = 1, ..., N do
3:   for \ell = 1, ..., L do
4:     sample Z^{(1:i)}_{(\ell)} from Z^{(1:i-1)}_{(\ell)} using Algorithm 1
5:     calculate w_{(\ell)} using Eqns. 5 and 7
6:   end for
7:   normalize particle weights
8:   resample particles according to the weight cumulative distribution
9: end for

Figure 2: Generation of X under the linear-Gaussian model. The first four images (left to right) correspond to the true latent features, i.e. rows of Y. The fifth shows how the images get combined, with two source images added together by multiplying by a single row of Z, z_{i,:} = [1 0 0 1]. The sixth is Gaussian noise. The seventh image is the resulting row of X.
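As a numerical sanity check on Eqn. 6, the collapsed log likelihood can be written directly in NumPy. This is our own sketch of the formula (with `sx`, `sy` for \sigma_X, \sigma_Y), not the authors' implementation:

```python
import numpy as np

def collapsed_log_lik(X, Z, sx, sy):
    """Log of Eqn. 6: P(X | Z, sigma_X, sigma_Y) with Y integrated out."""
    N, D = X.shape
    Zp = Z[:, Z.sum(axis=0) > 0]            # non-empty columns Z_+
    Kp = Zp.shape[1]
    M = Zp.T @ Zp + (sx**2 / sy**2) * np.eye(Kp)
    Sinv = np.eye(N) - Zp @ np.linalg.solve(M, Zp.T)   # Sigma^{-1}
    _, logdetM = np.linalg.slogdet(M)
    return (-N * D / 2 * np.log(2 * np.pi)
            - (N - Kp) * D * np.log(sx)
            - Kp * D * np.log(sy)
            - D / 2 * logdetM
            - np.trace(X.T @ Sinv @ X) / (2 * sx**2))
```

The K_+ × K_+ solve replaces the naive N × N inversion of the column covariance \sigma_Y^2 Z_+ Z_+^T + \sigma_X^2 I, to which Eqn. 6 is equivalent by the Woodbury identity.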
Reweighting the particles requires computing P(X^{(i)} | Z^{(1:i)}, X^{(1:i-1)}), the conditional probability of the most recent row of X given all the previous rows and Z. Since P(X^{(1:i)} | Z^{(1:i)}) is matrix Gaussian, we can find the required conditional distribution by following the standard rules for conditioning in Gaussians. Letting \Lambda^{-1}, with \Lambda = \Sigma^{-1}/\sigma_X^2, be the covariance matrix for X^{(1:i)} given Z^{(1:i)}, we can partition this matrix into four parts,

\Lambda^{-1} = \begin{pmatrix} A & c \\ c^T & b \end{pmatrix}

where A is a matrix, c is a vector, and b is a scalar. Then the conditional distribution of X^{(i)} is

X^{(i)} | Z^{(1:i)}, X^{(1:i-1)} ~ Gaussian(c^T A^{-1} X^{(1:i-1)}, b - c^T A^{-1} c).   (7)

This requires inverting a matrix A which grows linearly with the size of the data; however, A is highly structured and this can be exploited to reduce the cost of the inversion [10].

4.2 Experiments
We compared the particle filter in Algorithm 2 with Gibbs sampling on an image dataset similar to that used in [1]. Due to space limitations we refer the reader to [1] for the details of the Gibbs sampler for this model. As illustrated in Fig. 2, our ground-truth Y consisted of four different 6 × 6 latent images. A 100 × 4 binary ground-truth matrix Z was generated by sampling from P(z_{i,k} = 1) = 0.5. The observed matrix X was generated by adding Gaussian noise with \sigma_X = 0.5 to each entry of ZY. Fig. 3 compares results from the particle filter and Gibbs sampler for this model. The performance of the two approaches was measured by comparing a general error metric computed over the posterior distributions estimated by each. The error metric (the vertical axis in Figs. 3 and 5) was computed by taking the expectation of the matrix ZZ^T over the posterior samples produced by each algorithm and taking the summed absolute difference (i.e. L1 norm) between the upper triangular portion of E[ZZ^T] computed over the samples and the upper triangular portion of the true ZZ^T (including the diagonal). See Fig.
4 for an illustration of the information conveyed by ZZ^T. This error metric measures the distance of the mean of the posterior from the ground truth. It is zero if the mean of the distribution matches the ground truth. It grows as a function of the difference between the ground truth and the posterior mean, accounting both for any difference in the number of latent factors that are present in each observation and for any difference in the number of latent factors that are shared between all pairs of observations. The particle filter was run using many different numbers of particles, P. For each value of P, the particle filter was run 10 times. The horizontal axis location of each errorbar in the plot is the mean

Figure 3: Performance results for particle filter vs. Gibbs sampling posterior estimation for the infinite linear-Gaussian matrix factorization. Each point is an average over 10 runs with a particular number of particles or sweeps of the sampler, P = [1, 10, 100, 500, 1000, 2500, 5000] left to right, and error bars indicate the standard deviation of the error.

wall-clock computation time on 2 GHz Athlon 64 processors running Matlab for the corresponding number of particles P, while the error bars indicate the standard deviation of the error. The Gibbs sampler was run for varying numbers of sweeps, with the initial 10% of samples being discarded. The number of Gibbs sampler sweeps was varied and the results are displayed in the same way as described for the particle filter above. The results show that the particle filter attains low error in significantly less time than the Gibbs sampler, with the difference being an order of magnitude or more in most cases.
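The error metric just described is straightforward to compute. The following is a minimal sketch under our own naming, taking a list of posterior samples of Z and the ground-truth Z:

```python
import numpy as np

def zzt_error(Z_samples, Z_true):
    """L1 distance between E[Z Z^T] over posterior samples and the
    ground-truth Z Z^T, over the upper triangle including the diagonal."""
    Ezzt = np.mean([Z @ Z.T for Z in Z_samples], axis=0)
    diff = np.abs(Ezzt - Z_true @ Z_true.T)
    iu = np.triu_indices(diff.shape[0])     # upper triangle + diagonal
    return diff[iu].sum()
```

Because ZZ^T is invariant to column permutations of Z, this metric sidesteps the label-switching problem that a direct comparison of Z matrices would face.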
This is a result of the fact that the particle filter considers only a single row of X on each iteration, reducing the cost of computing the likelihood.

5 A Semi-Conjugate Model: Infinite Binary Matrix Factorization
In this model, first presented in the context of learning hidden causal structure [2], the entries of both X and Y are binary. Each row of X represents the values of a single observed variable across D trials or cases, each row of Y gives the values of a latent variable (a "hidden cause") across those trials or cases, and Z is the adjacency matrix of a bipartite Bayesian network indicating which latent variables influence which observed variables. Learning the hidden causal structure then corresponds to inferring Z and Y from X. The model fits our schema for nonparametric Bayesian matrix factorization (and hence is amenable to the use of our particle filter) since the likelihood function it uses depends only on the product ZY. The likelihood function for this model assumes that each entry of X is generated independently, P(X | Z, Y) = \prod_{i,d} P(x_{i,d} | Z, Y), with its probability given by the "noisy-OR" [11] of the causes that influence that variable (identified by the corresponding row of Z) and are active for that case or trial (expressed in Y). The probability that x_{i,d} takes the value 1 is thus

P(x_{i,d} = 1 | Z, Y) = 1 - (1 - \lambda)^{z_{i,:} y_{:,d}} (1 - \epsilon)   (8)

where z_{i,:} is the ith row of Z, y_{:,d} is the dth column of Y, and z_{i,:} y_{:,d} = \sum_{k=1}^{K} z_{i,k} y_{k,d}. The parameter \epsilon sets the probability that x_{i,d} = 1 when no relevant causes are active, and \lambda determines how this probability changes as the number of relevant active hidden causes increases. To complete the model, we assume that the entries of Y are generated independently from a Bernoulli process with parameter p, to give P(Y) = \prod_{k,d} p^{y_{k,d}} (1 - p)^{1 - y_{k,d}}, and use the IBP prior for Z.

5.1 Particle Filter
In this model the prior over Y is not conjugate to the likelihood, so we are forced to explicitly represent Y in our particle filter state, as outlined in Eqns. 3 and 4. However, we can define a more efficient algorithm than the basic particle filter due to the tractability of some integrals. This is why we call this model a "semi-conjugate" model. The basic particle filter defined in Section 3 requires drawing the new rows of Y from the prior when we generate new columns of Z. This can be problematic since the chance of producing an assignment of values to Y that has high probability under the likelihood can be quite low, in effect wasting many particles. However, if we can analytically marginalize out the new rows of Y, we can avoid sampling those values from the prior and instead sample them from the posterior, in effect saving many of the potentially wasted particles.

Algorithm 3 Particle filter for infinite binary matrix factorization
1: initialize L particles [Z^{(0)}_{(\ell)}, Y^{(0)}_{(\ell)}], \ell = 1, ..., L
2: for i = 1, ..., N do
3:   for \ell = 1, ..., L do
4:     sample Z^{(1:i)}_{(\ell)} from Z^{(1:i-1)}_{(\ell)} using Algorithm 1
5:     calculate w_{(\ell)} using Eqns. 5 and 11
6:   end for
7:   normalize particle weights
8:   resample particles according to the weight CDF
9:   for \ell = 1, ..., L do
10:    sample Y^{(i)}_{(\ell)} from P(Y^{(i)} | Z^{(1:i)}_{(\ell)}, Y^{(1:i-1)}_{(\ell)}, X^{(1:i)})
11:  end for
12: end for

Figure 4: Infinite binary matrix factorization results. On the left is ground truth, the causal graph representation of Z and ZZ^T. The middle and right are particle filtering results: a single random particle Z and E[ZZ^T] from a 500 and a 10000 particle run, middle and right respectively.
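The noisy-OR likelihood of Eqn. 8, which supplies the particle weights, can be computed as follows. This is a minimal sketch (the function name and `lam`/`eps` for \lambda/\epsilon are our notation):

```python
import numpy as np

def noisy_or_log_lik(X, Z, Y, lam, eps):
    """Log likelihood under the noisy-OR model of Eqn. 8:
    P(x_{id} = 1 | Z, Y) = 1 - (1 - lam)^{z_i . y_d} (1 - eps)."""
    M = Z @ Y                                  # counts of active causes
    p1 = 1.0 - (1.0 - lam) ** M * (1.0 - eps)  # P(x_{id} = 1)
    return np.sum(X * np.log(p1) + (1 - X) * np.log1p(-p1))
```

Computing the matrix product Z @ Y once vectorizes the per-entry noisy-OR over all i, d at once.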
If we let Y^{(1:i)} denote the rows of Y that correspond to the columns of Z^{(1:i)}, and Y^{(i)} denote the rows (potentially more than one) of Y that are introduced to match the new columns appearing in Z^{(i)}, then we can write

P(Z^{(1:i)}, Y^{(1:i)} | X^{(1:i)}) = P(Y^{(i)} | Z^{(1:i)}, Y^{(1:i-1)}, X^{(1:i)}) P(Z^{(1:i)}, Y^{(1:i-1)} | X^{(1:i)})   (9)

where

P(Z^{(1:i)}, Y^{(1:i-1)} | X^{(1:i)}) \propto P(X^{(i)} | Z^{(1:i)}, Y^{(1:i-1)}, X^{(1:i-1)}) P(Z^{(1:i)}, Y^{(1:i-1)} | X^{(1:i-1)}).   (10)

Thus, we can use the particle filter to estimate P(Z^{(1:i)}, Y^{(1:i-1)} | X^{(1:i)}) (rather than P(Z^{(1:i)}, Y^{(1:i)} | X^{(1:i)})), provided that we can find a way to compute P(X^{(i)} | Z^{(1:i)}, Y^{(1:i-1)}) and sample from the distribution P(Y^{(i)} | Z^{(1:i)}, Y^{(1:i-1)}, X^{(1:i)}) to complete our particles.

The procedure described in the previous paragraph is possible in this model because, while our prior on Y is not conjugate to the likelihood, it is still possible to compute P(X^{(i)} | Z^{(1:i)}, Y^{(1:i-1)}). The entries of X^{(i)} are independent given Z^{(1:i)} and Y^{(1:i)}. Since the entries in each column of Y^{(i)} will influence only a single entry of X^{(i)}, this independence is maintained when we sum out Y^{(i)}. So we can derive an analytic solution to P(X^{(i)} | Z^{(1:i)}, Y^{(1:i-1)}) = \prod_d P(x_{i,d} | Z^{(1:i)}, Y^{(1:i-1)}), where

P(x_{i,d} = 1 | Z^{(1:i)}, Y^{(1:i-1)}) = 1 - (1 - \epsilon)(1 - \lambda)^{\eta} (1 - p\lambda)^{K_i^{new}}   (11)

with K_i^{new} being the number of new columns in Z^{(i)}, and \eta = z_{i, 1:K_+^{(1:i-1)}} y_{1:K_+^{(1:i-1)}, d}. For a detailed derivation see [2]. This gives us the likelihood we need for reweighting particles Z^{(1:i)} and Y^{(1:i-1)}. The posterior distribution on Y^{(i)} is straightforward to compute by combining the likelihood in Eqn. 8 with the prior P(Y). The particle filtering algorithm for this model is given in Algorithm 3.

5.2 Experiments
We compared the particle filter in Algorithm 3 with Gibbs sampling on a dataset generated from the model described above, using the same Gibbs sampling algorithm and data generation procedure as developed in [2].
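The collapsed predictive probability of Eqn. 11 follows from the fact that each new Bernoulli(p) entry of Y contributes an expected factor of (1 - p) + p(1 - \lambda) = 1 - p\lambda to P(x_{i,d} = 0). A one-line sketch (our naming; `eta` is the dot product of the row of Z with the existing rows of Y) can be checked against brute-force enumeration of the new rows:

```python
def marginal_p_x1(eta, k_new, lam, eps, p):
    """Eqn. 11: P(x_{id} = 1 | Z^(1:i), Y^(1:i-1)) with the K_i^new new
    rows of Y summed out; eta = z_{i,1:K+} . y_{1:K+,d}."""
    return 1.0 - (1.0 - eps) * (1.0 - lam) ** eta * (1.0 - p * lam) ** k_new
```

Each new column is active with probability p and, when active, multiplies P(x = 0) by (1 - \lambda), which is what the (1 - p\lambda)^{K_i^{new}} factor averages over.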
We took K_+ = 4 and N = 6, running the IBP multiple times with \alpha = 3 until a matrix Z of the correct dimensionality (6 × 4) was produced. This matrix is shown in Fig. 4 as a bipartite graph, where the observed variables are shaded. A 4 × 250 random matrix Y was generated with p = 0.1. The observed matrix X was then sampled from Eqn. 8 with parameters \lambda = 0.9 and \epsilon = 0.01. Comparison of the particle filter and Gibbs sampling was done using the procedure outlined in Section 4.2, producing similar results: the particle filter gave a better approximation to the posterior distribution in less time, as shown in Fig. 5.

Figure 5: Performance results for particle filter vs. Gibbs sampling posterior estimation for the infinite binary matrix factorization model. Each point is an average over 10 runs with a particular number of particles or sweeps of the sampler, P = [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000] from left to right, and error bars indicate the standard deviation of the error.

6 Conclusion
In this paper we have introduced particle filter posterior estimation for nonparametric Bayesian matrix factorization models based on the Indian buffet process. This approach is applicable to any Bayesian matrix factorization model with a sparse, recursively decomposable prior. We have applied this approach to two different models, one with a conjugate prior and one with a non-conjugate prior, finding significant computational savings over Gibbs sampling for each. However, more work needs to be done to explore the strengths and weaknesses of these algorithms. In particular, simple sequential importance resampling is known to break down when applied to datasets with many observations, although we are optimistic that methods for addressing this problem that have been developed for Dirichlet process mixture models (e.g., [5]) will also be applicable in this setting.
By exploring the strengths and weaknesses of different methods for approximate inference in these models, we hope to come closer to our ultimate goal of making nonparametric Bayesian matrix factorization into a tool that can be applied on the scale of real-world problems.

Acknowledgements This work was supported by NIH-NINDS R01 NS 50967-01, as part of the NSF/NIH Collaborative Research in Computational Neuroscience Program, and by NSF grant 0631518.

References
[1] T. L. Griffiths and Z. Ghahramani, "Infinite latent feature models and the Indian buffet process," Gatsby Computational Neuroscience Unit, Tech. Rep. 2005-001, 2005.
[2] F. Wood, T. L. Griffiths, and Z. Ghahramani, "A non-parametric Bayesian method for inferring hidden causes," in Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, 2006.
[3] T. Ferguson, "A Bayesian analysis of some nonparametric problems," The Annals of Statistics, vol. 1, pp. 209-230, 1973.
[4] R. M. Neal, "Markov chain sampling methods for Dirichlet process mixture models," Department of Statistics, University of Toronto, Tech. Rep. 9815, 1998.
[5] P. Fearnhead, "Particle filters for mixture models with an unknown number of components," Statistics and Computing, vol. 14, pp. 11-21, 2004.
[6] S. N. MacEachern, M. Clyde, and J. Liu, "Sequential importance sampling for nonparametric Bayes models: the next generation," The Canadian Journal of Statistics, vol. 27, pp. 251-267, 1999.
[7] T. Griffiths and Z. Ghahramani, "Infinite latent feature models and the Indian buffet process," in Advances in Neural Information Processing Systems 18, Y. Weiss, B. Schölkopf, and J. Platt, Eds. Cambridge, MA: MIT Press, 2006.
[8] A. Doucet, N. de Freitas, and N. Gordon, Sequential Monte Carlo Methods in Practice. Springer, 2001.
[9] D. Görür, F. Jäkel, and C. E.
Rasmussen, "A choice model with infinitely many latent features," in Proceedings of the 23rd International Conference on Machine Learning, 2006.
[10] S. Barnett, Matrix Methods for Engineers and Scientists. McGraw-Hill, 1979.
[11] J. Pearl, Probabilistic Reasoning in Intelligent Systems. San Francisco, CA: Morgan Kaufmann, 1988.
", "award": [], "sourceid": 3147, "authors": [{"given_name": "Frank", "family_name": "Wood", "institution": null}, {"given_name": "Thomas", "family_name": "Griffiths", "institution": null}]}