Part of Advances in Neural Information Processing Systems 20 (NIPS 2007)
Andrew Naish-Guzman, Sean Holden
We present an efficient generalization of the sparse pseudo-input Gaussian process (SPGP) model developed by Snelson and Ghahramani, applying it to binary classification problems. By taking advantage of the SPGP prior covariance structure, we derive a numerically stable algorithm with O(NM²) training complexity, asymptotically the same as related sparse methods such as the informative vector machine, but one which more faithfully represents the posterior. We present experimental results for several benchmark problems showing that in many cases this allows an exceptional degree of sparsity without compromising accuracy. As in the original SPGP, we locate pseudo-inputs by gradient ascent on the marginal likelihood, but we exhibit occasions when this is likely to fail, for which we suggest alternative solutions.
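The "SPGP prior covariance structure" referred to above is a low-rank term plus a diagonal correction, which is what permits O(NM²) training. The following sketch (not the paper's implementation; the RBF kernel, jitter value, and problem sizes are illustrative assumptions) shows how that covariance is built from M pseudo-inputs without ever forming an N×N inverse:

```python
# Sketch of the SPGP/FITC prior covariance: K_ff is approximated by
# Q_ff + diag(K_ff - Q_ff), where Q_ff = K_fu K_uu^{-1} K_uf is a
# rank-M term built from M pseudo-inputs. Kernel choice, jitter, and
# sizes here are illustrative assumptions, not the paper's settings.
import numpy as np

def rbf(A, B, lengthscale=1.0):
    # Squared-exponential kernel matrix between row-vector sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
N, M = 200, 10                       # N data points, M << N pseudo-inputs
X = rng.normal(size=(N, 1))
Xu = rng.normal(size=(M, 1))         # pseudo-input locations

Kff = rbf(X, X)
Kfu = rbf(X, Xu)
Kuu = rbf(Xu, Xu) + 1e-8 * np.eye(M)  # jitter for numerical stability

# Q_ff via a Cholesky solve: O(N M^2), never forming K_uu^{-1} explicitly.
L = np.linalg.cholesky(Kuu)
V = np.linalg.solve(L, Kfu.T)         # M x N, so Q_ff = V^T V
Qff = V.T @ V

# FITC/SPGP prior: low-rank term plus a diagonal correction, so the
# marginal (diagonal) variances match the full GP prior exactly.
fitc_cov = Qff + np.diag(np.diag(Kff) - np.diag(Qff))
assert np.allclose(np.diag(fitc_cov), np.diag(Kff))
```

Because the approximate covariance is (rank M) + (diagonal), all downstream linear algebra can be done with M×M factorizations, which is the source of the quoted O(NM²) cost.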