Part of Advances in Neural Information Processing Systems 14 (NIPS 2001)
Peter Sykacek, Stephen J. Roberts
This paper proposes an approach to classification of adjacent segments of a time series as being either of two classes. We use a hierarchical model that consists of a feature extraction stage and a generative classifier which is built on top of these features. Such two-stage approaches are often used in signal and image processing. The novel part of our work is that we link these stages probabilistically by using a latent feature space. Using one joint model is a Bayesian requirement, which has the advantage of fusing information according to its certainty. The classifier is implemented as a hidden Markov model with Gaussian and multinomial observation distributions defined on a suitably chosen representation of autoregressive models. The Markov dependency is motivated by the assumption that successive classifications will be correlated. Inference is done with Markov chain Monte Carlo (MCMC) techniques. We apply the proposed approach to synthetic data and to classification of EEG that was recorded while the subjects performed different cognitive tasks. All experiments show that using a latent feature space results in a significant improvement in generalization accuracy. Hence we expect that this idea generalizes well to other hierarchical models.
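To make the two-stage idea concrete, the sketch below pairs least-squares AR coefficient extraction with the forward pass of a two-state HMM whose Gaussian observation densities are defined on those coefficients. It is a minimal illustration under stated assumptions, not the authors' implementation: the paper samples the joint model (including the latent feature space and the multinomial observations) with MCMC, whereas every function name, AR order, and parameter value here is fixed by hand for illustration.

```python
# Minimal sketch: AR-coefficient features + Gaussian-observation HMM forward pass.
# All names, orders, and parameter values are illustrative assumptions.
import numpy as np

def ar_coefficients(x, order=3):
    """Least-squares fit of an AR(order) model to one segment."""
    # Predictors are x[t-1], ..., x[t-order]; target is x[t].
    X = np.column_stack([x[order - k - 1 : len(x) - k - 1] for k in range(order)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def gaussian_logpdf(f, mean, var):
    """Log density of a diagonal-covariance Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (f - mean) ** 2 / var)

def hmm_classify(features, means, variances, trans, prior):
    """Forward-filtered labels for a first-order Markov chain over classes."""
    n_states = len(prior)
    log_alpha = np.log(prior) + np.array(
        [gaussian_logpdf(features[0], means[s], variances[s]) for s in range(n_states)])
    labels = [int(np.argmax(log_alpha))]
    for f in features[1:]:
        # Propagate beliefs through the transition matrix, then weight by the
        # Gaussian likelihood of the current segment's AR features.
        log_alpha = np.logaddexp.reduce(log_alpha[:, None] + np.log(trans), axis=0)
        log_alpha += np.array(
            [gaussian_logpdf(f, means[s], variances[s]) for s in range(n_states)])
        labels.append(int(np.argmax(log_alpha)))
    return labels

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def ar_segment(coeffs, n=200):
        # Simulate one segment of an AR process driven by white noise.
        x = np.zeros(n)
        for t in range(len(coeffs), n):
            x[t] = np.dot(coeffs, x[t - len(coeffs):t][::-1]) + rng.normal(scale=0.1)
        return x

    # Two synthetic classes with distinct (stable) AR(2) dynamics.
    segments = [ar_segment(np.array([0.9, -0.2])) for _ in range(5)] \
             + [ar_segment(np.array([0.2, 0.5])) for _ in range(5)]
    feats = [ar_coefficients(s, order=2) for s in segments]

    means = np.array([[0.9, -0.2], [0.2, 0.5]])   # set by hand, not learned
    variances = np.full((2, 2), 0.05)
    trans = np.array([[0.9, 0.1], [0.1, 0.9]])    # sticky: successive labels correlate
    prior = np.array([0.5, 0.5])
    print(hmm_classify(feats, means, variances, trans, prior))
```

The sticky transition matrix encodes the abstract's assumption that successive classifications are correlated; replacing the hand-set means and variances with sampled parameters, and the point features with a latent feature space, is where the paper's MCMC scheme takes over.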