Fast Learning by Bounding Likelihoods in Sigmoid Type Belief Networks

Part of Advances in Neural Information Processing Systems 8 (NIPS 1995)


Authors

Tommi Jaakkola, Lawrence Saul, Michael Jordan

Abstract

Sigmoid type belief networks, a class of probabilistic neural networks, provide a natural framework for compactly representing probabilistic information in a variety of unsupervised and supervised learning problems. Often the parameters used in these networks need to be learned from examples. Unfortunately, estimating the parameters via exact probabilistic calculations (i.e., the EM algorithm) is intractable even for networks with fairly small numbers of hidden units. We propose to avoid the infeasibility of the E step by bounding likelihoods instead of computing them exactly. We introduce extended and complementary representations for these networks and show that the estimation of the network parameters can be made fast (reduced to quadratic optimization) by performing the estimation in either of the alternative domains. The complementary networks can be used for continuous density estimation as well.
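The core idea of replacing an exact likelihood with a tractable lower bound can be illustrated on a toy case. The sketch below (not the paper's construction; the network, its parameters `b`, `w`, `c`, and the variational distribution `q` are all hypothetical choices for illustration) uses Jensen's inequality with a Bernoulli variational distribution over a single hidden unit of a minimal sigmoid belief network. The bound never exceeds the exact log-likelihood, and becomes tight when `q` matches the true posterior:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bernoulli_logp(x, p):
    # log-probability of a binary outcome x under Bernoulli(p)
    return math.log(p if x == 1 else 1.0 - p)

# Toy sigmoid belief network with one hidden unit h and one visible unit v
# (hypothetical parameters): p(h=1) = sigmoid(b), p(v=1 | h) = sigmoid(w*h + c).
b, w, c = 0.5, 2.0, -1.0

def log_joint(v, h):
    return bernoulli_logp(h, sigmoid(b)) + bernoulli_logp(v, sigmoid(w * h + c))

def exact_loglik(v):
    # Exact log p(v): sum out the hidden unit (intractable for many hidden units).
    return math.log(sum(math.exp(log_joint(v, h)) for h in (0, 1)))

def lower_bound(v, q):
    # Jensen's inequality: E_q[log p(v,h)] + H(q) <= log p(v),
    # where q = q(h=1) parameterizes a Bernoulli variational distribution.
    bound = 0.0
    for h, qh in ((0, 1.0 - q), (1, q)):
        if qh > 0:
            bound += qh * (log_joint(v, h) - math.log(qh))
    return bound

# Any choice of q gives a valid lower bound on the exact log-likelihood.
for q in (0.1, 0.5, 0.9):
    assert lower_bound(1, q) <= exact_loglik(1) + 1e-12
```

Maximizing such a bound over the variational parameters (in place of the exact E step) is what makes the estimation tractable; the paper shows this optimization reduces to a quadratic problem in the alternative representations.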