Unsupervised Variational Bayesian Learning of Nonlinear Models

Part of Advances in Neural Information Processing Systems 17 (NIPS 2004)


Authors

Antti Honkela, Harri Valpola

Abstract

In this paper we present a framework for using multi-layer perceptron (MLP) networks in nonlinear generative models trained by variational Bayesian learning. The nonlinearity is handled by linearizing it using a Gauss–Hermite quadrature at the hidden neurons. This yields an accurate approximation for cases of large posterior variance. The method can be used to derive nonlinear counterparts for linear algorithms such as factor analysis, independent component/factor analysis and state-space models. This is demonstrated with a nonlinear factor analysis experiment in which even 20 sources can be estimated from a real-world speech data set.
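To illustrate the quadrature idea in the abstract, the following is a minimal sketch (not the authors' implementation) of propagating a Gaussian posterior on a hidden neuron's input through its nonlinearity with Gauss–Hermite quadrature. The function name `gh_propagate`, the choice of `tanh`, and the parameter values are illustrative assumptions.

```python
import numpy as np

def gh_propagate(phi, mu, var, n_points=5):
    """Approximate E[phi(x)] and Var[phi(x)] for x ~ N(mu, var)
    using Gauss-Hermite quadrature (illustrative sketch)."""
    # Nodes/weights for integrating against exp(-t^2); the change of
    # variables x = mu + sqrt(2*var)*t adapts them to N(mu, var).
    t, w = np.polynomial.hermite.hermgauss(n_points)
    x = mu + np.sqrt(2.0 * var) * t
    w = w / np.sqrt(np.pi)            # normalize weights to sum to 1
    m = np.sum(w * phi(x))            # approximate posterior mean of phi(x)
    v = np.sum(w * (phi(x) - m) ** 2) # approximate posterior variance of phi(x)
    return m, v

# Example: a tanh hidden unit whose input has a wide posterior.
mean, variance = gh_propagate(np.tanh, mu=0.5, var=2.0)
print(mean, variance)
```

Because the quadrature integrates the nonlinearity over the full input distribution rather than relying on a Taylor expansion at the mean, it remains accurate even when the posterior variance is large, which is the regime the abstract highlights.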