{"title": "The Kernel Beta Process", "book": "Advances in Neural Information Processing Systems", "page_first": 963, "page_last": 971, "abstract": "A new Le \u0301vy process prior is proposed for an uncountable collection of covariate- dependent feature-learning measures; the model is called the kernel beta process (KBP). Available covariates are handled efficiently via the kernel construction, with covariates assumed observed with each data sample (\u201ccustomer\u201d), and latent covariates learned for each feature (\u201cdish\u201d). Each customer selects dishes from an infinite buffet, in a manner analogous to the beta process, with the added constraint that a customer first decides probabilistically whether to \u201cconsider\u201d a dish, based on the distance in covariate space between the customer and dish. If a customer does consider a particular dish, that dish is then selected probabilistically as in the beta process. The beta process is recovered as a limiting case of the KBP. An efficient Gibbs sampler is developed for computations, and state-of-the-art results are presented for image processing and music analysis tasks.", "full_text": "The Kernel Beta Process\n\nElectrical & Computer Engineering Dept.\n\nElectrical & Computer Engineering Dept.\n\nLu Ren\u2217\n\nDuke University\n\nDurham, NC 27708\nlr22@duke.edu\n\nDavid Dunson\n\nYingjian Wang\u2217\n\nDuke University\n\nDurham, NC 27708\nyw65@duke.edu\n\nLawrence Carin\n\nDuke University\n\nDurham, NC 27708\nlcarin@duke.edu\n\nDepartment of Statistical Science\n\nElectrical & Computer Engineering Dept.\n\nDuke University\n\nDurham, NC 27708\n\ndunson@stat.duke.edu\n\nAbstract\n\nA new L\u00b4evy process prior is proposed for an uncountable collection of covariate-\ndependent feature-learning measures; the model is called the kernel beta process\n(KBP). 
Available covariates are handled ef\ufb01ciently via the kernel construction,\nwith covariates assumed observed with each data sample (\u201ccustomer\u201d), and latent\ncovariates learned for each feature (\u201cdish\u201d). Each customer selects dishes from an\nin\ufb01nite buffet, in a manner analogous to the beta process, with the added constraint\nthat a customer \ufb01rst decides probabilistically whether to \u201cconsider\u201d a dish, based\non the distance in covariate space between the customer and dish. If a customer\ndoes consider a particular dish, that dish is then selected probabilistically as in\nthe beta process. The beta process is recovered as a limiting case of the KBP. An\nef\ufb01cient Gibbs sampler is developed for computations, and state-of-the-art results\nare presented for image processing and music analysis tasks.\n\n1\n\nIntroduction\n\nFeature learning is an important problem in statistics and machine learning, characterized by the goal\nof (typically) inferring a low-dimensional set of features for representation of high-dimensional data.\nIt is desirable to perform such analysis in a nonparametric manner, such that the number of features\nmay be learned, rather than a priori set. A powerful tool for such learning is the Indian buffet\nprocess (IBP) [4], in which the data samples serve as \u201ccustomers\u201d, and the potential features serve\nas \u201cdishes\u201d. It has recently been demonstrated that the IBP corresponds to a marginalization of a\nbeta-Bernoulli process [15]. The IBP and beta-Bernoulli constructions have found signi\ufb01cant utility\nin factor analysis [7, 17], in which one wishes to infer the number of factors needed to represent\ndata of interest. 
The beta process was developed originally by Hjort [5] as a Lévy process prior for "hazard measures", and was recently extended for use in feature learning [15], the application of interest in this paper; we therefore refer to it here as a "feature-learning measure." The beta process is an example of a Lévy process [6], another example of which is the gamma process [1]; the normalized gamma process is well known as the Dirichlet process [3, 14]. A key characteristic of such models is that the data samples are assumed exchangeable, meaning that the order/indices of the data may be permuted with no change in the model.

∗The first two authors contributed equally to this work.

An important line of research concerns removal of the assumption of exchangeability, allowing incorporation of covariates (e.g., spatial/temporal coordinates that may be available with the data). As an example, MacEachern introduced the dependent Dirichlet process [8]. In the context of feature learning, the phylogenetic IBP removes the assumption of sample exchangeability by imposing prior knowledge on inter-sample relationships via a tree structure [9]. The form of the tree may be constituted as a result of covariates that are available with the samples, but the tree is not necessarily unique. A dependent IBP (dIBP) model has been introduced recently, with a hierarchical Gaussian process (GP) used to account for covariate dependence [16]; however, the use of a GP may pose challenges for large-scale problems. Recently a dependent hierarchical beta process (dHBP) has been developed, yielding encouraging results [18].
However, the dHBP has the disadvantage of assigning a kernel to each data sample, and it therefore scales unfavorably as the number of samples increases.

In this paper we develop a new Lévy process prior, termed the kernel beta process (KBP), which yields an uncountable number of covariate-dependent feature-learning measures, with the beta process a special case. This model may be interpreted as inferring covariates $x_i^*$ for each feature (dish), indexed by $i$. The generative process by which the $n$th data sample, with covariates $x_n$, selects features may be viewed as a two-step process. First the $n$th customer (data sample) decides whether to "examine" dish $i$ by drawing $z_{ni}^{(1)} \sim \mathrm{Bernoulli}(K(x_n, x_i^*; \psi_i^*))$, where $\psi_i^*$ are dish-dependent kernel parameters that are also inferred (the $\{\psi_i^*\}$ defining the meaning of proximity/locality in covariate space). The kernels are designed to satisfy $K(x_n, x_i^*; \psi_i^*) \in (0, 1]$, $K(x_i^*, x_i^*; \psi_i^*) = 1$, and $K(x_n, x_i^*; \psi_i^*) \to 0$ as $\|x_n - x_i^*\|_2 \to \infty$. In the second step, if $z_{ni}^{(1)} = 1$, customer $n$ draws $z_{ni}^{(2)} \sim \mathrm{Bernoulli}(\pi_i)$, and if $z_{ni}^{(2)} = 1$, the feature associated with dish $i$ is employed by data sample $n$. The parameters $\{x_i^*, \psi_i^*, \pi_i\}$ are inferred by the model. After computing the posterior distribution on model parameters, the number of kernels required to represent the measures is defined by the number of features employed from the buffet (typically small relative to the data size); this is a significant computational savings relative to [18, 16], for which the complexity of the model is tied to the number of data samples, even if a small number of features are ultimately employed.

In addition to introducing this new Lévy process, we examine its properties, and demonstrate how it may be efficiently applied in important data analysis problems. The hierarchical construction of the KBP is fully conjugate, admitting convenient Gibbs sampling (complicated sampling methods were required for the method in [18]). To demonstrate the utility of the model we consider image-processing and music-analysis applications, for which state-of-the-art performance is demonstrated compared to other relevant methods.

2 Kernel Beta Process

2.1 Review of beta and Bernoulli processes

A beta process $B \sim \mathrm{BP}(c, B_0)$ is a distribution on positive random measures over the space $(\Omega, \mathcal{F})$. Parameter $c(\omega)$ is a positive function over $\omega \in \Omega$, and $B_0$ is the base measure defined over $\Omega$. The beta process is an example of a Lévy process, and the Lévy measure of $\mathrm{BP}(c, B_0)$ is

$$\nu(d\pi, d\omega) = c(\omega)\pi^{-1}(1 - \pi)^{c(\omega)-1} d\pi\, B_0(d\omega) \qquad (1)$$

To draw $B$, one draws a set of points $(\omega_i, \pi_i) \in \Omega \times [0,1]$ from a Poisson process with measure $\nu$, yielding

$$B = \sum_{i=1}^{\infty} \pi_i \delta_{\omega_i} \qquad (2)$$

where $\delta_{\omega_i}$ is a unit point measure at $\omega_i$; $B$ is therefore a discrete measure, with probability one. The infinite sum in (2) is a consequence of drawing $\mathrm{Poisson}(\lambda)$ atoms $\{\omega_i, \pi_i\}$, with $\lambda = \int_{\Omega}\int_{[0,1]} \nu(d\omega, d\pi) = \infty$. Additionally, for any set $A \subset \mathcal{F}$, $B(A) = \sum_{i: \omega_i \in A} \pi_i$.

If $Z_n \sim \mathrm{BeP}(B)$ is the $n$th draw from a Bernoulli process, with $B$ defined as in (2), then

$$Z_n = \sum_{i=1}^{\infty} b_{ni} \delta_{\omega_i}, \qquad b_{ni} \sim \mathrm{Bernoulli}(\pi_i) \qquad (3)$$

A set of $N$ such draws, $\{Z_n\}_{n=1,N}$, may be used to define whether feature $\omega_i \in \Omega$ is utilized to represent the $n$th data sample, where $b_{ni} = 1$ if feature $\omega_i$ is employed, and $b_{ni} = 0$ otherwise.
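The truncated beta-Bernoulli construction in (2)-(3) is easy to sketch in code. The snippet below is an illustrative toy, not the paper's implementation: it uses the $\pi_i \sim \mathrm{Beta}(1/T, 1)$ truncation introduced later in Sec. 3.1, and a 1-D Gaussian stand-in for the base measure $B_0$.

```python
import random

def draw_truncated_bp(T=50, seed=0):
    # Truncated approximation to B ~ BP(c, B0) with c = 1: T atoms, each with
    # weight pi_i ~ Beta(1/T, 1) and a location omega_i drawn i.i.d. from a
    # toy 1-D Gaussian base measure standing in for B0.
    rng = random.Random(seed)
    pis = [rng.betavariate(1.0 / T, 1.0) for _ in range(T)]
    omegas = [rng.gauss(0.0, 1.0) for _ in range(T)]
    return pis, omegas, rng

def draw_bernoulli_process(pis, rng):
    # Z_n ~ BeP(B), eq. (3): one independent coin flip per atom,
    # b_ni ~ Bernoulli(pi_i); the atoms with b_ni = 1 are the features used.
    return [1 if rng.random() < p else 0 for p in pis]

pis, omegas, rng = draw_truncated_bp()
Z = [draw_bernoulli_process(pis, rng) for _ in range(5)]  # five "customers"
```

Because $\mathrm{Beta}(1/T, 1)$ concentrates most $\pi_i$ near zero, each customer typically selects only a small subset of the $T$ candidate features, which is the sparsity the construction is after.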
One may marginalize out the measure $B$ analytically, yielding conditional probabilities for the $\{Z_n\}$ that correspond to the Indian buffet process [15, 4].

2.2 Covariate-dependent Lévy process

In the above beta-Bernoulli construction, the same measure $B \sim \mathrm{BP}(c, B_0)$ is employed for generation of all $\{Z_n\}$, implying that each of the $N$ samples has the same probabilities $\{\pi_i\}$ for use of the respective features $\{\omega_i\}$. We now assume that with each of the $N$ samples of interest there is an associated set of covariates, denoted respectively as $\{x_n\}$, with each $x_n \in \mathcal{X}$. We wish to impose that if samples $n$ and $n'$ have similar covariates $x_n$ and $x_{n'}$, it is probable that they will employ a similar subset of the features $\{\omega_i\}$; if the covariates are distinct it is less probable that feature sharing will be manifested.

Generalizing (2), consider

$$B = \sum_{i=1}^{\infty} \gamma_i \delta_{\omega_i}, \qquad \omega_i \sim B_0 \qquad (4)$$

where $\gamma_i = \{\gamma_i(x) : x \in \mathcal{X}\}$ is a stochastic process (random function) from $\mathcal{X} \to [0,1]$ (drawn independently from the $\{\omega_i\}$). Hence, $B$ is a dependent collection of Lévy processes, with the measure specific to covariate $x \in \mathcal{X}$ being $B_x = \sum_{i=1}^{\infty} \gamma_i(x) \delta_{\omega_i}$. This constitutes a general specification, with several interesting special cases. For example, one might consider $\gamma_i(x) = g\{\mu_i(x)\}$, where $g : \mathbb{R} \to [0,1]$ is any monotone differentiable link function and $\mu_i(x) : \mathcal{X} \to \mathbb{R}$ may be modeled as a Gaussian process [10], or a related kernel-based construction. To choose $g\{\mu_i(x)\}$ one can potentially use models for the predictor-dependent breaks in probit, logistic or kernel stick-breaking processes [13, 11, 2]. In the remainder of this paper we propose a special case for the design of $\gamma_i(x)$, termed the kernel beta process (KBP).

2.3 Characteristic function of the kernel beta process

Recall from Hjort [5] that $B \sim \mathrm{BP}(c(\omega), B_0)$ is a beta process on measure space $(\Omega, \mathcal{F})$ if its characteristic function satisfies

$$E[e^{juB(A)}] = \exp\Big\{ \int_{[0,1] \times A} (e^{ju\pi} - 1)\, \nu(d\pi, d\omega) \Big\} \qquad (5)$$

where here $j = \sqrt{-1}$, and $A$ is any subset in $\mathcal{F}$. The beta process is a particular class of the Lévy process, with $\nu(d\pi, d\omega)$ defined as in (1).

For kernel $K(x, x^*; \psi^*)$, let $x \in \mathcal{X}$, $x^* \in \mathcal{X}$, and $\psi^* \in \Psi$; it is assumed that $K(x, x^*; \psi^*) \in [0,1]$ for all $x$, $x^*$ and $\psi^*$. As a specific example, consider the radial basis function $K(x, x^*; \psi^*) = \exp[-\psi^* \|x - x^*\|^2]$, where $\psi^* \in \mathbb{R}_+$. Let $x^*$ represent random variables drawn from probability measure $H$, with support on $\mathcal{X}$, and $\psi^*$ is also a random variable drawn from an appropriate probability measure $Q$ with support over $\Psi$ (e.g., in the context of the radial basis function, the $\psi^*$ are drawn from a probability measure with support over $\mathbb{R}_+$). We now define a new Lévy measure

$$\nu_{\mathcal{X}} = H(dx^*)\, Q(d\psi^*)\, \nu(d\pi, d\omega) \qquad (6)$$

where $\nu(d\pi, d\omega)$ is the Lévy measure associated with the beta process, defined in (1).

Theorem 1 Assume parameters $\{x_i^*, \psi_i^*, \pi_i, \omega_i\}$ are drawn from measure $\nu_{\mathcal{X}}$ in (6), and that the following measure is constituted

$$B_x = \sum_{i=1}^{\infty} \pi_i K(x, x_i^*; \psi_i^*)\, \delta_{\omega_i} \qquad (7)$$

which may be evaluated for any covariate $x \in \mathcal{X}$. For any finite set of covariates $S = \{x_1, \ldots, x_{|S|}\}$, we define the $|S|$-dimensional random vector $K = (K(x_1, x^*; \psi^*), \ldots, K(x_{|S|}, x^*; \psi^*))^T$, with random variables $x^*$ and $\psi^*$ drawn from $H$ and $Q$, respectively. For any set $A \subset \mathcal{F}$, the $B$ evaluated at covariates $S$, on the set $A$, yields an $|S|$-dimensional random vector $B(A) = (B_{x_1}(A), \ldots, B_{x_{|S|}}(A))^T$, where $B_x(A) = \sum_{i: \omega_i \in A} \pi_i K(x, x_i^*; \psi_i^*)$. Expression (7) is a covariate-dependent Lévy process with Lévy measure (6), and characteristic function for an arbitrary set of covariates $S$ satisfying, for $u \in \mathbb{R}^{|S|}$,

$$E[e^{j\langle u, B(A)\rangle}] = \exp\Big\{ \int_{\mathcal{X} \times \Psi \times [0,1] \times A} (e^{j\langle u, K\rangle \pi} - 1)\, \nu_{\mathcal{X}}(dx^*, d\psi^*, d\pi, d\omega) \Big\} \qquad (8)$$

A proof is provided in the Supplemental Material. Additionally, for notational convenience, below a draw of (7), valid for all covariates in $\mathcal{X}$, is denoted $B \sim \mathrm{KBP}(c, B_0, H, Q)$, with $c$ and $B_0$ defining $\nu(d\pi, d\omega)$ in (1).

2.4 Relationship to the beta-Bernoulli process

If the covariate-dependent measure $B_x$ in (7) is employed to define covariate-dependent feature usage, then $Z_x \sim \mathrm{BeP}(B_x)$, generalizing (3). Hence, given $\{x_i^*, \psi_i^*, \pi_i\}$, the feature-usage measure is $Z_x = \sum_{i=1}^{\infty} b_{xi} \delta_{\omega_i}$, with $b_{xi} \sim \mathrm{Bernoulli}(\pi_i K(x, x_i^*; \psi_i^*))$. Note that it is equivalent in distribution to express $b_{xi} = z_{xi}^{(1)} z_{xi}^{(2)}$, with $z_{xi}^{(1)} \sim \mathrm{Bernoulli}(K(x, x_i^*; \psi_i^*))$ and $z_{xi}^{(2)} \sim \mathrm{Bernoulli}(\pi_i)$. This model therefore yields the two-step generalization of the generative process of the beta-Bernoulli process discussed in the Introduction. The condition $z_{xi}^{(1)} = 1$ only has high probability when the observed covariates $x$ are near the (latent/inferred) covariates $x_i^*$.
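Concretely, a truncated draw from (7) can be represented as a list of atoms $(\pi_i, x_i^*, \psi_i^*, \omega_i)$, with the covariate-dependent mass obtained by kernel-weighting the $\pi_i$. In this sketch the base measures (uniform $H$, discrete $Q$, Gaussian stand-in for $B_0$) are toy assumptions, not choices from the paper's experiments:

```python
import math
import random

def rbf_kernel(x, x_star, psi):
    # K(x, x*; psi*) = exp(-psi* ||x - x*||^2), taking values in (0, 1].
    return math.exp(-psi * (x - x_star) ** 2)

def draw_kbp_atoms(T=100, seed=1):
    # Truncated draw of the atoms in eq. (7): each atom carries
    # (pi_i, x*_i, psi*_i, omega_i) from nu_X = H(dx*) Q(dpsi*) nu(dpi, domega).
    rng = random.Random(seed)
    return [dict(pi=rng.betavariate(1.0 / T, 1.0),
                 x_star=rng.uniform(0.0, 1.0),        # H: uniform on [0, 1]
                 psi=rng.choice([1.0, 10.0, 100.0]),  # Q: discrete (cf. Sec. 3.3)
                 omega=rng.gauss(0.0, 1.0))           # toy stand-in for B0
            for _ in range(T)]

def total_mass(atoms, x):
    # B_x(Omega) = sum_i pi_i K(x, x*_i; psi*_i): the mass of the measure
    # seen by a customer with covariate x.
    return sum(a["pi"] * rbf_kernel(x, a["x_star"], a["psi"]) for a in atoms)

atoms = draw_kbp_atoms()
```

Since $K \le 1$, the mass $B_x(\Omega)$ never exceeds the underlying beta-process mass $\sum_i \pi_i$, and it shrinks as $x$ moves away from the inferred locations $\{x_i^*\}$.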
It is deemed attractive that this intuitive generative process comes as a result of a rigorous Lévy process construction, the properties of which are summarized next.

2.5 Properties of B

For all Borel subsets $A \in \mathcal{F}$, if $B$ is drawn from the KBP, then for covariates $x, x' \in \mathcal{X}$ we have

$$E[B_x(A)] = B_0(A)\, E(K_x)$$

$$\mathrm{Cov}(B_x(A), B_{x'}(A)) = E(K_x K_{x'}) \int_A \frac{B_0(d\omega)(1 - B_0(d\omega))}{c(\omega) + 1} - \mathrm{Cov}(K_x, K_{x'}) \int_A B_0^2(d\omega)$$

where $E(K_x) = \int_{\mathcal{X} \times \Psi} K(x, x^*; \psi^*)\, H(dx^*)\, Q(d\psi^*)$. If $K(x, x^*; \psi^*) = 1$ for all $x \in \mathcal{X}$, then $E(K_x) = E(K_x K_{x'}) = 1$ and $\mathrm{Cov}(K_x, K_{x'}) = 0$, and the above results reduce to those for the original BP [15].

Assume $c(\omega) = c$, where $c \in \mathbb{R}_+$ is a constant, and let $K_x = (K(x, x_1^*; \psi_1^*), K(x, x_2^*; \psi_2^*), \ldots)^T$ represent an infinite-dimensional vector; then for fixed kernel parameters $\{x_i^*, \psi_i^*\}$,

$$\mathrm{Corr}(B_x(A), B_{x'}(A)) = \frac{\langle K_x, K_{x'} \rangle}{\|K_x\|_2 \cdot \|K_{x'}\|_2} \qquad (9)$$

where it is assumed that $\langle K_x, K_{x'} \rangle$, $\|K_x\|_2$ and $\|K_{x'}\|_2$ are finite; the latter condition is always met when we (in practice) truncate the number of terms used in (7). The expression in (9) clearly imposes the desired property of high correlation in $B_x$ and $B_{x'}$ when $x$ and $x'$ are proximate.

Proofs of the above properties are provided in the Supplemental Material.

3 Applications

3.1 Model construction

We develop a covariate-dependent factor model, generalizing [7, 17], which did not consider covariates. Consider data $y_n \in \mathbb{R}^M$ with associated covariates $x_n \in \mathbb{R}^L$, with $n = 1, \ldots, N$.
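Equation (9) is straightforward to evaluate for a truncated set of fixed kernel parameters. The sketch below (toy scalar covariates, the RBF kernel assumed throughout the paper) simply confirms the stated behavior: the correlation equals 1 at $x = x'$ and decays as the covariates separate.

```python
import math

def kernel_vec(x, params):
    # K_x = (K(x, x*_1; psi*_1), K(x, x*_2; psi*_2), ...)^T for fixed
    # kernel parameters params = [(x*_i, psi*_i), ...].
    return [math.exp(-psi * (x - xs) ** 2) for xs, psi in params]

def corr_measures(x, xp, params):
    # Eq. (9): Corr(B_x(A), B_x'(A)) = <K_x, K_x'> / (||K_x||_2 ||K_x'||_2).
    kx, kxp = kernel_vec(x, params), kernel_vec(xp, params)
    dot = sum(a * b for a, b in zip(kx, kxp))
    norm = lambda v: math.sqrt(sum(a * a for a in v))
    return dot / (norm(kx) * norm(kxp))

params = [(0.0, 1.0), (1.0, 1.0)]  # two atoms: (x*_i, psi*_i)
```

Because every entry of $K_x$ is positive, (9) is a cosine similarity between kernel profiles; it approaches 1 for proximate covariates and drops toward the similarity of the residual tails as they separate.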
The factor loadings in the factor model here play the role of "dishes" in the buffet analogy, and we model the data as

$$y_n = D(w_n \circ b_n) + \epsilon_n$$
$$Z_{x_n} \sim \mathrm{BeP}(B_{x_n}), \quad B \sim \mathrm{KBP}(c, B_0, H, Q), \quad B_0 \sim \mathrm{DP}(\alpha_0 G_0)$$
$$w_n \sim \mathcal{N}(0, \alpha_1^{-1} I_T), \quad \epsilon_n \sim \mathcal{N}(0, \alpha_2^{-1} I_M) \qquad (10)$$

with gamma priors placed on $\alpha_0$, $\alpha_1$ and $\alpha_2$, with $\circ$ representing the pointwise (Hadamard) vector product, and with $I_M$ representing the $M \times M$ identity matrix. The Dirichlet process [3] base measure is $G_0 = \mathcal{N}(0, \frac{1}{M} I_M)$, and the KBP base measure $B_0$ is a mixture of atoms (factor loadings). For the applications considered it is important that the same atoms be reused at different points $\{x_i^*\}$ in covariate space, to allow repeated structure to be manifested as a function of space or time, within the image and music applications, respectively. The columns of $D$ are defined respectively by $(\omega_1, \omega_2, \ldots)$ in $B$, and the vector $b_n = (b_{n1}, b_{n2}, \ldots)$ with $b_{nk} = Z_{x_n}(\omega_k)$. Note that $B$ is drawn once from the KBP, and when drawing the $Z_{x_n}$ we evaluate $B$ as defined by the respective covariate $x_n$.

When implementing the KBP, we truncate the sum in (7) to $T$ terms, and draw the $\pi_i \sim \mathrm{Beta}(1/T, 1)$, which corresponds to setting $c = 1$. We set $T$ large, and the model infers the subset of $\{\pi_i\}_{i=1,T}$ that have significant amplitude, thereby estimating the number of factors needed for representation of the data. In practice we let $H$ and $Q$ be multinomial distributions over a discrete and finite set of, respectively, locations for $\{x_i^*\}$ and kernel parameters for $\{\psi_i^*\}$, details of which are discussed in the specific examples.

In (10), the $i$th column of $D$, denoted $D_i$, is drawn from $B_0$, with $B_0$ drawn from a Dirichlet process (DP). There are multiple ways to perform such DP clustering, and here we apply the Pólya urn scheme [3]. Assume $D_1, D_2, \ldots, D_{i-1}$ are a series of i.i.d. random draws from $B_0$; then the successive conditional distribution of $D_i$ is of the following form:

$$D_i \,|\, D_1, \ldots, D_{i-1}, \alpha_0, G_0 \;\sim\; \sum_{l=1}^{N_u} \frac{n_l^*}{i - 1 + \alpha_0}\, \delta_{D_l^*} + \frac{\alpha_0}{i - 1 + \alpha_0}\, G_0 \qquad (11)$$

where $\{D_l^*\}_{l=1,N_u}$ are the unique dictionary elements shared by the first $i-1$ columns of $D$, and $n_l^* = \sum_{j=1}^{i-1} \delta(D_j = D_l^*)$. For model inference, an indicator variable $c_i$ is introduced for each $D_i$: $c_i = l$ with a probability proportional to $n_l^*$, for $l = 1, \ldots, N_u$, and $c_i = N_u + 1$ with a probability controlled by $\alpha_0$. If $c_i = l$ for $l = 1, \ldots, N_u$, $D_i$ takes the value $D_l^*$; otherwise $D_i$ is drawn from the prior $G_0 = \mathcal{N}(0, \frac{1}{M} I_M)$, and a new dish/factor loading $D_{N_u+1}^*$ is hence introduced.

3.2 Extensions

It is relatively straightforward to include additional model sophistication in (10), one example of which we will consider in the context of the image-processing example. Specifically, in many applications it is inappropriate to assume a Gaussian model for the noise or residual $\epsilon_n$. In Section 4.3 we consider the following augmented noise model:

$$\epsilon_n = \lambda_n \circ m_n + \hat{\epsilon}_n \qquad (12)$$

$$\lambda_n \sim \mathcal{N}(0, \alpha_\lambda^{-1} I_M), \quad m_{np} \sim \mathrm{Bernoulli}(\tilde{\pi}_n), \quad \tilde{\pi}_n \sim \mathrm{Beta}(a_0, b_0), \quad \hat{\epsilon}_n \sim \mathcal{N}(0, \alpha_3^{-1} I_M)$$

with gamma priors placed on $\alpha_\lambda$ and $\alpha_3$, and with $p = 1, \ldots, M$. The term $\lambda_n \circ m_n$ accounts for "spiky" noise, with potentially large amplitude, and $\tilde{\pi}_n$ represents the probability of spiky noise in data sample $n$. This type of noise model was considered in [18], with which we compare.

3.3 Inference

The model inference is performed with a Gibbs sampler.
Due to limited space, only those variables having update equations distinct from those in the BP-FA of [17] are included here. Assume $T$ is the truncation level for the number of dictionary elements, $\{D_i\}_{i=1,T}$; $N_u$ is the number of unique dictionary-element values in the current Gibbs iteration, $\{D_l^*\}_{l=1,N_u}$. For the applications considered in this paper, $K(x_n, x_i^*; \psi_i^*)$ is defined based on the Euclidean distance: $K(x_n, x_i^*; \psi_i^*) = \exp[-\psi_i^* \|x_n - x_i^*\|^2]$ for $i = 1, \ldots, T$; both $\psi_i^*$ and $x_i^*$ are updated from multinomial distributions (defining $Q$ and $H$, respectively) over a set of discretized values with a uniform prior for each; more details on this are discussed in Sec. 4.

• Update $\{D_l^*\}_{l=1,N_u}$: $D_l^* \sim \mathcal{N}(\mu_l, \Sigma_l)$, with

$$\mu_l = \Sigma_l \Big[ \alpha_2 \sum_{n=1}^N \sum_{i: c_i = l} (b_{ni} w_{ni})\, y_n^{-l} \Big], \qquad \Sigma_l = \Big[ \alpha_2 \sum_{n=1}^N \sum_{i: c_i = l} (b_{ni} w_{ni})^2 + M \Big]^{-1} I_M,$$

where $y_n^{-l} = y_n - \sum_{i: c_i \neq l} D_i (b_{ni} w_{ni})$.

• Update $\{c_i\}_{i=1,T}$: $p(c_i) \sim \mathrm{Mult}(p_i)$, with

$$p(c_i = l \,|\, -) \propto \begin{cases} \dfrac{n_{l,-i}^*}{T - 1 + \alpha_0} \prod_{n=1}^N \exp\{-\frac{\alpha_2}{2} \|y_n^{-i} - D_l^*(b_{ni} w_{ni})\|_2^2\}, & \text{if } l \text{ is previously used}, \\[4pt] \dfrac{\alpha_0}{T - 1 + \alpha_0} \prod_{n=1}^N \exp\{-\frac{\alpha_2}{2} \|y_n^{-i} - D_{l_{new}}^*(b_{ni} w_{ni})\|_2^2\}, & \text{if } l = l_{new}, \end{cases}$$

where $n_{l,-i}^* = \sum_{j: j \neq i} \delta(D_j = D_l^*)$ and $y_n^{-i} = y_n - \sum_{k: k \neq i} D_k (b_{nk} w_{nk})$; $p_i$ is realized by normalizing the above equation.

• Update $\{Z_{x_n}\}_{n=1,N}$: for $Z_{x_n}$, update each component $p(b_{ni}) \sim \mathrm{Bernoulli}(v_{ni})$ for $i = 1, \ldots, T$, with

$$\frac{p(b_{ni} = 1)}{p(b_{ni} = 0)} = \frac{\exp\{-\frac{\alpha_2}{2} [D_i^T D_i w_{ni}^2 - 2 w_{ni} D_i^T y_n^{-i}]\}\, \pi_i K(x_n, x_i^*; \psi_i^*)}{1 - \pi_i K(x_n, x_i^*; \psi_i^*)}.$$

$v_{ni}$ is calculated by normalizing $p(b_{ni})$ with the above constraint.

• Update $\{\pi_i\}_{i=1,T}$: introduce two sets of auxiliary variables $\{z_{ni}^{(1)}\}_{i=1,T}$ and $\{z_{ni}^{(2)}\}_{i=1,T}$ for each data sample $y_n$. Assume $z_{ni}^{(1)} \sim \mathrm{Bernoulli}(\pi_i)$ and $z_{ni}^{(2)} \sim \mathrm{Bernoulli}(K(x_n, x_i^*; \psi_i^*))$. For each specific $n$:

– If $b_{ni} = 1$, then $z_{ni}^{(1)} = 1$ and $z_{ni}^{(2)} = 1$;
– If $b_{ni} = 0$,

$$p(z_{ni}^{(1)} = 0, z_{ni}^{(2)} = 0 \,|\, b_{ni} = 0) = \frac{(1 - \pi_i)(1 - K(x_n, x_i^*; \psi_i^*))}{1 - \pi_i K(x_n, x_i^*; \psi_i^*)}$$
$$p(z_{ni}^{(1)} = 0, z_{ni}^{(2)} = 1 \,|\, b_{ni} = 0) = \frac{(1 - \pi_i)\, K(x_n, x_i^*; \psi_i^*)}{1 - \pi_i K(x_n, x_i^*; \psi_i^*)}$$
$$p(z_{ni}^{(1)} = 1, z_{ni}^{(2)} = 0 \,|\, b_{ni} = 0) = \frac{\pi_i\, (1 - K(x_n, x_i^*; \psi_i^*))}{1 - \pi_i K(x_n, x_i^*; \psi_i^*)}$$

From the above equations, we derive the conditional distribution for $\pi_i$:

$$\pi_i \sim \mathrm{Beta}\Big( \frac{1}{T} + \sum_n z_{ni}^{(1)},\; 1 + \sum_n (1 - z_{ni}^{(1)}) \Big).$$

4 Results

4.1 Hyperparameter settings

For both $\alpha_1$ and $\alpha_2$ the corresponding prior was set to $\mathrm{Gamma}(10^{-6}, 10^{-6})$; the concentration parameter $\alpha_0$ was given a prior $\mathrm{Gamma}(1, 0.1)$. For both experiments below, the number of dictionary elements $T$ was truncated to 256, the number of unique dictionary-element values was initialized to 100, and the $\{\pi_i\}_{i=1,T}$ were initialized to 0.5. All $\{\psi_i^*\}_{i=1,T}$ were initialized to $10^{-5}$ and updated from the set $\{10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 1\}$ with a uniform prior $Q$.
The remaining variables were initialized randomly. No parameter tuning or optimization was performed.

4.2 Music analysis

We consider the same music piece as described in [12]: "A Day in the Life" from the Beatles' album Sgt. Pepper's Lonely Hearts Club Band. The acoustic signal was sampled at 22.05 kHz and divided into 50 ms contiguous frames; 40-dimensional Mel frequency cepstral coefficients (MFCCs) were extracted from each frame, shown in Figure 1(a).

A typical goal of music analysis is to infer interrelationships within the music piece, as a function of time [12]. For the audio data, each MFCC vector $y_n$ has an associated time index, the latter used as the covariate $x_n$. The finite set of temporal sample points (covariates) was employed to define a library for the $\{x_i^*\}$, and $H$ is a uniform distribution over this set. After 2000 burn-in iterations, we collected samples every five iterations. Figure 1(b) shows the frequency for the number of unique dictionary elements used by the data, based on the 1600 collected samples; and Figure 1(c) shows the frequency for the total number of dictionary elements used.

Figure 1: (a) MFCC features used in music analysis, where the horizontal axis corresponds to time, for "A Day in the Life". Based on the Gibbs collection samples: (b) frequency on the number of unique dictionary elements, and (c) the total number of dictionary elements.

With the model defined in (10), the sparse vector $b_n \circ w_n$ indicates the importance of each dictionary element from $\{D_i\}_{i=1,T}$ to data $y_n$. Each of these $N$ vectors $\{b_n \circ w_n\}_{n=1,N}$ was normalized within each Gibbs sample, and used to compute a correlation matrix associated with the $N$ time points in the music. Finally, this matrix was averaged across the collection samples, to yield a correlation matrix relating one part of the music to all others.
For a fair comparison between our methods and the model proposed in [12] (which used an HMM, and computed correlations over windows of time), we divided the whole piece into multiple consecutive short-time windows. Each temporal window includes 75 consecutive feature vectors, and we compute the average correlation coefficients between the features within each pair of windows. There were 88 temporal windows in total (each temporal window is denoted as a sequence in Figure 2), and the dimension of the correlation matrix is accordingly 88 × 88. The computed correlation matrix for the proposed KBP model is presented in Figure 2(a).

We compared KBP performance with results based on BP-FA [17], in which covariates are not employed, and with results from the dynamic clustering model in [12], in which a dynamic HMM is employed (in [12] a dynamic HDP, or dHDP, was used in concert with an HMM). The BP-FA results correspond to replacing the KBP with a BP. The correlation matrices computed from the BP-FA and the dHDP-HMM [12] are shown in Figures 2(b) and (c), respectively. The dHDP-HMM results yield a reasonably good segmentation of the music, but that model is unable to infer subtle differences in the music over time (for example, all voices in the music are clustered together, even if they are different). Since the BP-FA does not capture as much localized information in the music (the probability of dictionary usage is the same for all temporal positions), it does not manifest as good a music segmentation as the dHDP-HMM. By contrast, the KBP-FA model yields a good music segmentation, while also capturing subtle differences in the music over time (e.g., in voices). Note that the use of the DP to allow repeated use of dictionary elements as a function of time (covariates) is important here, due to the repetition of structure in the piece.
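The window-level correlation matrix described above can be sketched as follows. For brevity this toy version correlates the mean usage vector of each 75-frame window, rather than averaging the correlation coefficients over all frame pairs as the text specifies; this is a simplification, not the paper's exact procedure:

```python
import math

def window_means(usage, win=75):
    # usage: per-frame normalized weight vectors (b_n o w_n); average them
    # within each contiguous window of `win` frames.
    n_win = len(usage) // win
    dim = len(usage[0])
    return [[sum(usage[w * win + t][d] for t in range(win)) / win
             for d in range(dim)]
            for w in range(n_win)]

def corr_matrix(vecs):
    # Pearson correlation between every pair of window-mean usage vectors.
    def corr(u, v):
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        su = math.sqrt(sum((a - mu) ** 2 for a in u))
        sv = math.sqrt(sum((b - mv) ** 2 for b in v))
        if su == 0.0 or sv == 0.0:
            return 0.0
        return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)
    return [[corr(u, v) for v in vecs] for u in vecs]
```

Windows that reuse the same dictionary elements (repeated musical structure) produce high off-diagonal entries, which is what the block structure in Figure 2(a) reflects.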
One may listen to the music and observe the segmentation at http://www.youtube.com/watch?v=35YhHEbIlEI.

Figure 2: Inference of relationships in music as a function of time, as computed via a correlation of the dictionary-usage weights, for (a) and (b), and based upon state usage in an HMM, for (c). Results are shown for "A Day in the Life." The results in (c) are from [12], as a courtesy from the authors of that paper. (a) KBP-FA, (b) BP-FA, (c) dHDP-HMM.

4.3 Image interpolation and denoising

We consider image interpolation and denoising as two additional potential applications. In both of these examples each image is divided into $N$ overlapping $8 \times 8$ patches, and each patch is stacked into a vector of length $M = 64$, constituting observation $y_n \in \mathbb{R}^M$. The covariate $x_n$ represents the patch coordinates in the 2-D space. The probability measure $H$ corresponds to a uniform distribution over the centers of all $8 \times 8$ patches. The images were recovered based on the average of the collection samples, and each pixel was averaged across all overlapping patches in which it resided. For the image-processing examples, 5000 Gibbs samples were run, with the first 2000 discarded as burn-in.

For image interpolation, we only observe a fraction of the image pixels, sampled uniformly at random.
The model infers the underlying dictionary $D$ in the presence of this missing data, as well as the weights on the dictionary elements required for representing the observed components of $\{y_n\}$; using the inferred dictionary and associated weights, one may readily impute the missing pixel values. In Table 1 we present average PSNR values on the recovered pixel values, as a function of the fraction of pixels that are observed (20% in Table 1 means that 80% of the pixels are missing uniformly at random). Comparisons are made between a model based on BP and one based on the proposed KBP; the latter generally performs better, particularly when a large fraction of the pixels are missing. The proposed algorithm yields results that are comparable to those in [18], which also employed covariates within the BP construction. However, the proposed KBP construction has the significant computational advantage of only requiring kernels centered at the locations of the dictionary-dependent covariates $\{x_i^*\}$, while the model in [18] has a kernel for each of the image patches, and therefore it scales unfavorably for large images.

Table 1: Comparison of BP and KBP for interpolating images with pixels missing uniformly at random, using standard image-processing images. The top and bottom rows of each cell show results of BP and KBP, respectively.
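The patch/covariate setup of Sec. 4.3 amounts to the following extraction step; a sketch with toy indexing, in which each stacked $8 \times 8$ patch is a $y_n$ and its center pixel coordinates serve as the covariate $x_n$:

```python
def extract_patches(img, p=8):
    # img: H x W grayscale image as a list of rows. Every overlapping p x p
    # patch is stacked (row-major) into a length p*p vector y_n; its covariate
    # x_n is the patch center in 2-D pixel coordinates.
    H, W = len(img), len(img[0])
    pairs = []
    for r in range(H - p + 1):
        for c in range(W - p + 1):
            y = [img[r + i][c + j] for i in range(p) for j in range(p)]
            x = (r + (p - 1) / 2.0, c + (p - 1) / 2.0)
            pairs.append((y, x))
    return pairs
```

Because the patches overlap, each pixel appears in up to 64 observation vectors, and the reconstruction described in the text averages a pixel's value across all patches containing it.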
Results are shown when 20%, 30% and 50% of the pixels are observed, selected uniformly at random.

RATIO      C.MAN  HOUSE  PEPPERS  LENA   BARBARA  BOATS  F.PRINT  MAN    COUPLE  HILL
20%   BP   23.75  29.75  25.56    30.97  26.84    27.84  26.49    28.29  27.76   29.38
      KBP  24.02  30.89  26.29    31.38  28.93    28.11  26.89    28.37  28.03   29.67
30%   BP   25.59  33.09  28.64    33.30  30.13    30.20  29.23    29.89  29.97   31.19
      KBP  25.75  34.02  29.29    33.33  31.46    30.24  29.37    30.12  30.33   31.25
50%   BP   28.66  38.26  32.53    36.79  35.95    33.05  33.50    33.19  33.61   34.19
      KBP  28.78  38.35  32.69    35.89  36.03    33.18  32.18    32.35  32.35   32.60

In the image-denoising example in Figure 3 the images were corrupted with both white Gaussian noise (WGN) and sparse spiky noise, as considered in [18]. The sparse spiky noise exists in particular pixels, selected uniformly at random, with amplitude distributed uniformly between −255 and 255. For the pepper image, 15% of the pixels were corrupted by spiky noise, and the standard deviation of the WGN was 15; for the house image, 10% of the pixels were corrupted by spiky noise and the standard deviation of the WGN was 10. We compared different methods on both images: the augmented KBP-FA model (KBP-FA+) of Sec. 3.2, the BP-FA model augmented with a term for spiky noise (BP-FA+), and the original BP-FA model. The model with KBP showed the best denoising result under both visual and quantitative evaluation. Again, these results are comparable to those in [18], with the significant computational advantage discussed above.
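The PSNR figures in Table 1 (and Figure 3) follow the standard definition for 8-bit images; a minimal sketch:

```python
import math

def psnr(ref, rec, peak=255.0):
    # PSNR (dB) = 10 log10(peak^2 / MSE) between a reference image and its
    # reconstruction, both given as flat lists of pixel values.
    mse = sum((a - b) ** 2 for a, b in zip(ref, rec)) / len(ref)
    return float("inf") if mse == 0.0 else 10.0 * math.log10(peak * peak / mse)
```

For the interpolation experiments the average is taken over the recovered (previously missing) pixel values, so `ref` and `rec` would hold only those entries.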
Note that here the imposition of covariates and the KBP yields marked improvements in this application, relative to BP-FA alone.

Figure 3: Denoising results: the first column shows the noisy images (PSNR is 15.56 dB for Peppers and 17.54 dB for House); the second and third columns show the results inferred from the BP-FA model (PSNR is 16.31 dB for Peppers and 17.95 dB for House), with the dictionary elements shown in column two and the reconstruction in column three; the fourth and fifth columns show results from BP-FA+ (PSNR is 23.06 dB for Peppers and 26.71 dB for House); the sixth and seventh columns show the results of KBP-FA+ (PSNR is 27.37 dB for Peppers and 34.89 dB for House). In each case the dictionaries are ordered based on their frequency of usage, starting from top-left.

5 Summary

A new Lévy process, the kernel beta process, has been developed for the problem of nonparametric Bayesian feature learning, with example results presented for music analysis, image denoising, and image interpolation. In addition to presenting theoretical properties of the model, state-of-the-art results are realized on these learning tasks. The inference is performed via a Gibbs sampler, with analytic update equations. Concerning computational costs, for the music-analysis problem, for example, the BP model required around 1 second per Gibbs iteration, with KBP requiring about 3 seconds, with results run on a PC with a 2.4 GHz CPU, in non-optimized Matlab.

Acknowledgment

The research reported here was supported by AFOSR, ARO, DARPA, DOE, NGA and ONR.

References

[1] D. Applebaum. Lévy Processes and Stochastic Calculus. Cambridge University Press, 2009.
[2] D. B. Dunson and J.-H. Park. Kernel stick-breaking processes. Biometrika, 95:307-323, 2008.
[3] T. Ferguson. A Bayesian analysis of some nonparametric problems. The Annals of Statistics, 1973.
[4] T. L. Griffiths and Z. Ghahramani.
Infinite latent feature models and the Indian buffet process. In NIPS, 2005.
[5] N. L. Hjort. Nonparametric Bayes estimators based on beta processes in models for life history data. Annals of Statistics, 1990.
[6] J. F. C. Kingman. Poisson Processes. Oxford University Press, 2002.
[7] D. Knowles and Z. Ghahramani. Infinite sparse factor analysis and infinite independent components analysis. In Independent Component Analysis and Signal Separation, 2007.
[8] S. N. MacEachern. Dependent nonparametric processes. In Proceedings of the Section on Bayesian Statistical Science, 1999.
[9] K. Miller, T. Griffiths, and M. I. Jordan. The phylogenetic Indian buffet process: A non-exchangeable nonparametric prior for latent features. In UAI, 2008.
[10] C. E. Rasmussen and C. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[11] L. Ren, L. Du, L. Carin, and D. B. Dunson. Logistic stick-breaking process. J. Machine Learning Research, 2011.
[12] L. Ren, D. Dunson, S. Lindroth, and L. Carin. Dynamic nonparametric Bayesian models for analysis of music. Journal of the American Statistical Association, 105:458-472, 2010.
[13] A. Rodriguez and D. B. Dunson. Nonparametric Bayesian models through probit stick-breaking processes. Univ. California Santa Cruz Technical Report, 2009.
[14] J. Sethuraman. A constructive definition of Dirichlet priors. Statistica Sinica, 1994.
[15] R. Thibaux and M. I. Jordan. Hierarchical beta processes and the Indian buffet process. In AISTATS, 2007.
[16] S. Williamson, P. Orbanz, and Z. Ghahramani. Dependent Indian buffet processes. In AISTATS, 2010.
[17] M. Zhou, H. Chen, J. Paisley, L. Ren, G. Sapiro, and L. Carin. Non-parametric Bayesian dictionary learning for sparse image representations. In NIPS, 2009.
[18] M. Zhou, H. Yang, G. Sapiro, D. Dunson, and L. Carin. Dependent hierarchical beta process for image interpolation and denoising.
In AISTATS, 2011.