Sparse Overcomplete Latent Variable Decomposition of Counts Data

Part of Advances in Neural Information Processing Systems 20 (NIPS 2007)


Authors

Madhusudana Shashanka, Bhiksha Raj, Paris Smaragdis

Abstract

An important problem in many fields is the analysis of counts data to extract meaningful latent components. Methods like Probabilistic Latent Semantic Analysis (PLSA) and Latent Dirichlet Allocation (LDA) have been proposed for this purpose. However, they are limited in the number of components they can extract and also do not have a provision to control the "expressiveness" of the extracted components. In this paper, we present a learning formulation to address these limitations by employing the notion of sparsity. We start with the PLSA framework and use an entropic prior in a maximum a posteriori formulation to enforce sparsity. We show that this allows the extraction of overcomplete sets of latent components which better characterize the data. We present experimental evidence of the utility of such representations.
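
For readers who want the gist of the formulation before opening the paper, the sketch below spells out the standard symmetric PLSA factorization and an entropic-prior MAP objective of the kind the abstract describes. The notation and the choice of which multinomial the prior is placed on are illustrative assumptions here, not necessarily those used in the paper.

```latex
% Minimal standalone sketch; notation is illustrative, not the paper's exact formulation.
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% PLSA models counts over (feature f, document d) pairs as a mixture over
% latent components z:
\[
  P(f,d) \;=\; \sum_{z} P(z)\, P(f \mid z)\, P(d \mid z)
\]

% Sparsity can be encouraged with an entropic prior on a multinomial parameter
% vector $\theta$ (e.g., mixture weights); larger $\beta$ favors lower-entropy,
% i.e., sparser, distributions:
\[
  P(\theta) \;\propto\; e^{-\beta H(\theta)},
  \qquad
  H(\theta) \;=\; -\sum_i \theta_i \log \theta_i
\]

% Maximum a posteriori estimation then maximizes the penalized likelihood
% rather than the bare likelihood:
\[
  \hat{\Theta} \;=\; \arg\max_{\Theta}\;
    \log P(\mathrm{counts} \mid \Theta) \;-\; \beta\, H(\theta)
\]

\end{document}
```

With $\beta = 0$ this reduces to ordinary maximum-likelihood PLSA; increasing $\beta$ pushes the estimated distributions toward sparser solutions, which is what makes overcomplete sets of components identifiable in practice.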