Part of Advances in Neural Information Processing Systems 14 (NIPS 2001)
Brendan J. Frey, Anitha Kannan, Nebojsa Jojic
Factor analysis and principal components analysis can be used to model linear relationships between observed variables and to map high-dimensional data linearly to a lower-dimensional hidden space. In factor analysis, the observations are modeled as a linear combination of normally distributed hidden variables. We describe a nonlinear generalization of factor analysis, called "product analysis", that models the observed variables as a linear combination of products of normally distributed hidden variables. Just as factor analysis can be viewed as unsupervised linear regression on unobserved, normally distributed hidden variables, product analysis can be viewed as unsupervised linear regression on products of unobserved, normally distributed hidden variables. Because the mapping between the data and the hidden space is nonlinear, we use an approximate variational technique for inference and learning. Since product analysis generalizes factor analysis, it always achieves a data likelihood at least as high as that of factor analysis. We give results on pattern recognition and illumination-invariant image clustering.
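To make the contrast concrete, the two generative models described in the abstract can be sketched as samplers. This is a minimal illustration, not the paper's algorithm: factor analysis draws x = Λz + ε with Gaussian z, while the product-analysis sketch below replaces each hidden feature with a product of two Gaussian hidden variables (the specific pairwise pairing is an assumption made here for illustration).

```python
import numpy as np

def sample_factor_analysis(Lam, n, rng, noise_std=0.1):
    """Factor analysis: x = Lam @ z + noise, with z ~ N(0, I)."""
    d, k = Lam.shape
    Z = rng.standard_normal((n, k))
    return Z @ Lam.T + noise_std * rng.standard_normal((n, d))

def sample_product_analysis(Lam, n, rng, noise_std=0.1):
    """Illustrative product-analysis sampler: each hidden feature is a
    product of two independent N(0, 1) variables, and x is a linear
    combination of these products plus Gaussian noise. The pairwise
    product structure is an assumption for this sketch."""
    d, m = Lam.shape
    Z1 = rng.standard_normal((n, m))
    Z2 = rng.standard_normal((n, m))
    P = Z1 * Z2  # elementwise products of hidden Gaussian variables
    return P @ Lam.T + noise_std * rng.standard_normal((n, d))

rng = np.random.default_rng(0)
Lam = rng.standard_normal((5, 3))      # 5 observed dims, 3 hidden features
X_fa = sample_factor_analysis(Lam, 1000, rng)
X_pa = sample_product_analysis(Lam, 1000, rng)
```

Because products of Gaussians are heavier-tailed than Gaussians, the product-analysis samples are non-Gaussian even though the mapping from products to observations remains linear, which is why inference in the actual model requires the approximate variational technique mentioned above.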