Learning in Compositional Hierarchies: Inducing the Structure of Objects from Data

Part of Advances in Neural Information Processing Systems 6 (NIPS 1993)


Authors

Joachim Utans

Abstract

I propose an algorithm for learning hierarchical models for object recognition. The model architecture is a compositional hierarchy that represents part-whole relationships: parts are described in the local context of substructures of the object. The focus of this report is inducing the structure of hierarchical models from data, i.e. learning model prototypes from observed exemplars of an object. At each node in the hierarchy, a probability distribution governing its parameters must be learned. The connections between nodes reflect the structure of the object. The formation of substructures is encouraged such that their parts become conditionally independent. The resulting model can be interpreted as a Bayesian Belief Network and is in many respects similar to the stochastic visual grammar described by Mjolsness.
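The architecture sketched in the abstract can be illustrated with a minimal toy example. The sketch below is not the paper's algorithm: it assumes one-dimensional Gaussian distributions over each part's position relative to its parent substructure, and all names and data are hypothetical. It shows the two ideas the abstract states: each node carries a learned parameter distribution, and parts are conditionally independent given their substructure, so a joint score factors into a sum over children.

```python
import math

class PartNode:
    """One node of a compositional hierarchy (illustrative sketch).

    Each node stores a distribution over its parameters -- here a 1-D
    Gaussian over the part's position relative to its parent, so sibling
    parts are conditionally independent given the parent substructure.
    """
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.mean = 0.0
        self.var = 1.0

    def fit(self, samples):
        """Estimate the node's Gaussian from observed exemplars (ML fit)."""
        n = len(samples)
        self.mean = sum(samples) / n
        self.var = sum((x - self.mean) ** 2 for x in samples) / n

    def log_likelihood(self, x):
        """Log-density of one observed relative position under the node."""
        return -0.5 * (math.log(2 * math.pi * self.var)
                       + (x - self.mean) ** 2 / self.var)

# Hypothetical object: a "face" whose parts are described in the local
# context of the face substructure (positions relative to its center).
eye = PartNode("eye")
mouth = PartNode("mouth")
face = PartNode("face", children=[eye, mouth])

# Relative positions observed in exemplars (made-up data).
eye.fit([1.0, 1.1, 0.9, 1.05])
mouth.fit([-1.0, -0.9, -1.1, -1.0])

# Score a new exemplar: conditional independence given the parent means
# the joint log-likelihood is just a sum over the children.
score = eye.log_likelihood(1.0) + mouth.log_likelihood(-1.0)
print(score)
```

In a belief-network reading of the model, each edge from a substructure to a part carries such a conditional distribution, and learning the hierarchy amounts to choosing the tree structure and fitting these distributions from exemplars.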