Fisher Scoring and a Mixture of Modes Approach for Approximate Inference and Learning in Nonlinear State Space Models

Part of Advances in Neural Information Processing Systems 11 (NIPS 1998)


Authors

Thomas Briegel, Volker Tresp

Abstract

We present Monte-Carlo generalized EM equations for learning in nonlinear state space models. The difficulties lie in the Monte-Carlo E-step, which consists of sampling from the posterior distribution of the hidden variables given the observations. The new idea presented in this paper is to generate samples from a Gaussian approximation to the true posterior, from which it is easy to obtain independent samples. The parameters of the Gaussian approximation are derived either from the extended Kalman filter or from the Fisher scoring algorithm. In case the posterior density is multimodal, we propose to approximate the posterior by a sum of Gaussians (mixture of modes approach). We show that sampling from the approximate posterior densities obtained by the above algorithms leads to better models than using point estimates for the hidden states. In our experiment, the Fisher scoring algorithm obtained a better approximation of the posterior mode than the EKF. For a multimodal distribution, the mixture of modes approach gave superior results.
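To make the Monte-Carlo E-step idea concrete, the sketch below approximates the posterior over the hidden state sequence of a small nonlinear state space model by a single Gaussian centred at the posterior mode and draws independent samples from it. The 1-D model (functions `f`, `g`, noise levels `Q`, `R`, sequence length `T`) and the Newton/Laplace-style mode search and covariance are illustrative assumptions standing in for the paper's EKF and Fisher-scoring recursions, not the authors' exact implementation; running several mode searches from different starting points and keeping the distinct modes would give a mixture-of-modes approximation in the same spirit.

```python
# Minimal sketch of a Monte-Carlo E-step via a Gaussian approximation to the
# posterior over hidden states (Laplace-style stand-in for EKF / Fisher scoring).
# Model and parameters below are hypothetical, chosen only for illustration.
import numpy as np

rng = np.random.default_rng(0)

# --- hypothetical 1-D nonlinear state space model --------------------------
Q, R, T = 0.1, 0.2, 25            # process noise, observation noise, length

def f(x):                          # state transition mean
    return 0.8 * x + 0.2 * np.sin(x)

def g(x):                          # observation mean
    return np.tanh(x)

# simulate a state trajectory and noisy observations
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = f(x_true[t - 1]) + rng.normal(0.0, np.sqrt(Q))
y = g(x_true) + rng.normal(0.0, np.sqrt(R), size=T)

# --- joint negative log posterior of the hidden sequence x_{1:T} -----------
def neg_log_post(x):
    trans = x[1:] - f(x[:-1])                  # transition residuals
    obs = y - g(x)                             # observation residuals
    return 0.5 * (x[0] ** 2 / Q + trans @ trans / Q + obs @ obs / R)

def grad(x, eps=1e-5):                         # numerical gradient
    gv = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x); d[i] = eps
        gv[i] = (neg_log_post(x + d) - neg_log_post(x - d)) / (2 * eps)
    return gv

def hessian(x, eps=1e-4):                      # numerical Hessian, symmetrized
    H = np.zeros((len(x), len(x)))
    g0 = grad(x)
    for i in range(len(x)):
        d = np.zeros_like(x); d[i] = eps
        H[:, i] = (grad(x + d) - g0) / eps
    return 0.5 * (H + H.T)

# --- find the posterior mode with damped Newton steps (stand-in for the
#     paper's Fisher-scoring / EKF-based mode estimates) --------------------
x_mode = np.zeros(T)
for _ in range(20):
    H = hessian(x_mode) + 1e-3 * np.eye(T)     # small damping for stability
    x_mode -= np.linalg.solve(H, grad(x_mode))

# --- Gaussian approximation around the mode and i.i.d. posterior samples ---
cov = np.linalg.inv(hessian(x_mode) + 1e-6 * np.eye(T))
L = np.linalg.cholesky(cov)
samples = x_mode + (L @ rng.standard_normal((T, 100))).T   # 100 MC samples

# These samples would feed the Monte-Carlo E-step of a generalized EM
# algorithm to re-estimate the model parameters.
print("RMSE of posterior mode vs. true states:",
      np.sqrt(np.mean((x_mode - x_true) ** 2)))
```

The key point the paper exploits is that samples from such a Gaussian (or sum-of-Gaussians) approximation are cheap and independent, unlike MCMC draws from the exact posterior, so averaging the complete-data sufficient statistics over `samples` gives a tractable approximate E-step.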