Posterior Collapse and Latent Variable Non-identifiability

Part of Advances in Neural Information Processing Systems 34 (NeurIPS 2021)

Authors

Yixin Wang, David Blei, John P. Cunningham

Abstract

Variational autoencoders model high-dimensional data by positing low-dimensional latent variables that are mapped through a flexible distribution parametrized by a neural network. Unfortunately, variational autoencoders often suffer from posterior collapse: the posterior of the latent variables is equal to its prior, rendering the variational autoencoder useless as a means to produce meaningful representations. Existing approaches to posterior collapse often attribute it to the use of neural networks or optimization issues due to variational approximation. In this paper, we consider posterior collapse as a problem of latent variable non-identifiability. We prove that the posterior collapses if and only if the latent variables are non-identifiable in the generative model. This fact implies that posterior collapse is not a phenomenon specific to the use of flexible distributions or approximate inference. Rather, it can occur in classical probabilistic models even with exact inference, which we also demonstrate. Based on these results, we propose a class of latent-identifiable variational autoencoders, deep generative models which enforce identifiability without sacrificing flexibility. This model class resolves the problem of latent variable non-identifiability by leveraging bijective Brenier maps and parameterizing them with input convex neural networks, without special variational inference objectives or optimization tricks. Across synthetic and real datasets, latent-identifiable variational autoencoders outperform existing methods in mitigating posterior collapse and providing meaningful representations of the data.
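To make the architectural ingredient named in the abstract concrete, here is a minimal PyTorch sketch of an input convex neural network (ICNN) in the standard construction of Amos et al. (2017): the weights on the hidden path are clamped non-negative and the activations are convex and non-decreasing, so the scalar output is convex in the input. By Brenier's theorem, the gradient of such a (strictly) convex potential is a monotone map, which is the sense in which the abstract speaks of bijective Brenier maps. This is an illustrative sketch under those assumptions, not the authors' released code; the class name `ICNN` and the parameters `dim`, `hidden`, and `num_layers` are ours.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ICNN(nn.Module):
    """Input convex neural network: the scalar output is convex in x.

    Convexity follows from the standard ICNN rules: weights acting on the
    previous hidden layer are constrained non-negative, and the activation
    (softplus) is convex and non-decreasing.
    """

    def __init__(self, dim, hidden, num_layers=3):
        super().__init__()
        # Unconstrained "passthrough" weights applied to the raw input x.
        self.Wx = nn.ModuleList(
            [nn.Linear(dim, hidden)]
            + [nn.Linear(dim, hidden) for _ in range(num_layers - 2)]
            + [nn.Linear(dim, 1)]
        )
        # Weights on the previous hidden layer; clamped non-negative in forward().
        self.Wz = nn.ModuleList(
            [nn.Linear(hidden, hidden, bias=False) for _ in range(num_layers - 2)]
            + [nn.Linear(hidden, 1, bias=False)]
        )

    def forward(self, x):
        z = F.softplus(self.Wx[0](x))
        for Wx, Wz in zip(self.Wx[1:-1], self.Wz[:-1]):
            # Non-negative clamp preserves convexity of the composition.
            z = F.softplus(Wx(x) + F.linear(z, Wz.weight.clamp(min=0.0)))
        return self.Wx[-1](x) + F.linear(z, self.Wz[-1].weight.clamp(min=0.0))


# The gradient of the convex potential gives the (monotone) Brenier map.
f = ICNN(dim=2, hidden=64)
x = torch.randn(8, 2, requires_grad=True)
(brenier_map,) = torch.autograd.grad(f(x).sum(), x, create_graph=True)
```

The design point the sketch illustrates: because the map is the gradient of a strictly convex potential, it is injective, which removes the latent-variable non-identifiability that the paper shows is equivalent to posterior collapse.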