Posterior Consistency of the Silverman g-prior in Bayesian Model Choice

Part of Advances in Neural Information Processing Systems 21 (NIPS 2008)

Authors

Zhihua Zhang, Michael Jordan, Dit-Yan Yeung

Abstract

Kernel supervised learning methods can be unified using tools from regularization theory. The duality between regularizers and priors allows regularization methods to be interpreted in terms of maximum a posteriori estimation, and has motivated Bayesian interpretations of kernel methods. In this paper we pursue a Bayesian interpretation of sparsity in the kernel setting by making use of a mixture of a point-mass distribution and a prior that we refer to as "Silverman's g-prior." We provide a theoretical analysis of the posterior consistency of a Bayesian model choice procedure based on this prior, and we establish the asymptotic relationship between this procedure and the Bayesian information criterion (BIC).
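For concreteness, one plausible form of such a mixture prior, sketched by analogy with Zellner's g-prior with the kernel matrix K playing the role of X'X; the exact scaling and parameterization below are assumptions for illustration, not taken from the abstract:

\[
  \pi(\beta \mid \gamma, \sigma^2)
    \;=\; \prod_{j:\,\gamma_j = 0} \delta_0(\beta_j)
      \;\times\; \mathcal{N}\!\bigl(\beta_\gamma \,\big|\, 0,\; g\,\sigma^2 K_\gamma^{-1}\bigr),
\]

where \(\gamma \in \{0,1\}^n\) indexes the candidate model, \(\delta_0\) is a point mass at zero placed on the excluded coefficients, \(\beta_\gamma\) collects the included coefficients, and \(K_\gamma\) is the corresponding submatrix of the kernel matrix. Under a spike-and-slab prior of this kind, Bayesian model choice proceeds by comparing posterior probabilities of the \(\gamma\) configurations, and the paper's consistency result concerns whether this posterior concentrates on the true model as the sample size grows.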