Learning Joint Statistical Models for Audio-Visual Fusion and Segregation

Part of Advances in Neural Information Processing Systems 13 (NIPS 2000)



John W. Fisher III, Trevor Darrell, William Freeman, Paul Viola


People can understand complex auditory and visual information, often using one to disambiguate the other. Automated analysis, even at a low level, faces severe challenges, including the lack of accurate statistical models for the signals, and their high dimensionality and varied sampling rates. Previous approaches [6] assumed simple parametric models for the joint distribution which, while tractable, cannot capture the complex signal relationships. We learn the joint distribution of the visual and auditory signals using a non-parametric approach. First, we project the data into a maximally informative, low-dimensional subspace, suitable for density estimation. We then model the complicated stochastic relationships between the signals using a nonparametric density estimator. These learned densities allow processing across signal modalities. We demonstrate, on synthetic and real signals, localization in video of the face that is speaking in audio, and, conversely, audio enhancement of a particular speaker selected from the video.
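The pipeline the abstract describes — project each modality into a shared low-dimensional subspace, fit a nonparametric joint density there, then score audio-video pairs by that density — can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's method: it substitutes a cross-covariance (SVD) projection for the paper's learned maximally informative subspace, and uses a Gaussian kernel density estimate as the nonparametric density model. All variable names and the toy data are hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Synthetic stand-ins for audio and video feature streams that share a
# common latent cause (e.g. the motion of a speaker's lips).
n = 500
latent = rng.normal(size=n)
audio = np.column_stack([latent + 0.3 * rng.normal(size=n), rng.normal(size=n)])
video = np.column_stack([latent + 0.3 * rng.normal(size=n), rng.normal(size=n)])

# Crude stand-in for the learned projection: the directions maximizing
# cross-covariance between the two modalities (the paper instead learns
# a maximally informative, i.e. mutual-information-maximizing, subspace).
Ac = audio - audio.mean(axis=0)
Vc = video - video.mean(axis=0)
U, _, Wt = np.linalg.svd(Ac.T @ Vc)
a_proj = Ac @ U[:, 0]   # 1-D audio projection
v_proj = Vc @ Wt[0]     # 1-D video projection

# Nonparametric (kernel) estimate of the joint density in the 2-D subspace.
joint_kde = gaussian_kde(np.vstack([a_proj, v_proj]))

# Cross-modal matching: temporally aligned audio/video pairs should score
# higher under the learned joint density than misaligned (shuffled) pairs.
aligned_score = joint_kde.logpdf(np.vstack([a_proj, v_proj])).mean()
shuffled_score = joint_kde.logpdf(
    np.vstack([a_proj, v_proj[rng.permutation(n)]])
).mean()
print(aligned_score > shuffled_score)
```

Comparing the mean log-density of aligned versus shuffled pairs is a toy analogue of the paper's localization task: among several candidate faces, the one whose video features are jointly dense with the audio is the speaker.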