Part of Advances in Neural Information Processing Systems 11 (NIPS 1998)
Tommi Jaakkola, David Haussler
Generative probability models such as hidden Markov models provide a principled way of treating missing information and dealing with variable length sequences. On the other hand, discriminative methods such as support vector machines enable us to construct flexible decision boundaries and often result in classification performance superior to that of the model-based approaches. An ideal classifier should combine these two complementary approaches. In this paper, we develop a natural way of achieving this combination by deriving kernel functions for use in discriminative methods such as support vector machines from generative probability models. We provide a theoretical justification for this combination as well as demonstrate a substantial improvement in the classification performance in the context of DNA and protein sequence analysis.
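To make the abstract's central idea concrete, the sketch below shows one standard way to derive a kernel from a generative model: compute the gradient of the log-likelihood with respect to the model parameters (the Fisher score) and take inner products of these score vectors. This is a minimal illustration only; the toy unigram sequence model, the identity approximation to the Fisher information, and all function names are assumptions for exposition, not the paper's experimental setup.

```python
# Hedged sketch: a Fisher-style kernel from a toy generative model.
# The unigram model and identity Fisher-information approximation are
# illustrative assumptions, not the method as evaluated in the paper.
import numpy as np

ALPHABET = "ACGT"  # toy DNA alphabet for variable-length sequences

def fit_unigram(sequences):
    """Maximum-likelihood symbol probabilities theta (the generative model),
    with add-one smoothing so no probability is zero."""
    counts = np.ones(len(ALPHABET))
    for seq in sequences:
        for symbol in seq:
            counts[ALPHABET.index(symbol)] += 1
    return counts / counts.sum()

def fisher_score(seq, theta):
    """Fisher score U_x = gradient of log P(x | theta).
    For the unigram model, log P(x | theta) = sum_a n_a(x) log theta_a,
    so the a-th component of the gradient is n_a(x) / theta_a."""
    n = np.array([seq.count(a) for a in ALPHABET], dtype=float)
    return n / theta

def fisher_kernel(x, y, theta):
    """K(x, y) = U_x^T I^{-1} U_y, approximating the Fisher
    information matrix I by the identity (a common simplification)."""
    return fisher_score(x, theta) @ fisher_score(y, theta)

# Usage: fit the generative model, then feed the kernel to any
# kernel-based discriminative method (e.g., an SVM's Gram matrix).
sequences = ["ACGTAC", "GGTACA", "TTACGG"]
theta = fit_unigram(sequences)
gram = np.array([[fisher_kernel(x, y, theta) for y in sequences]
                 for x in sequences])
print(gram)
```

Note that sequences of different lengths map to fixed-dimensional score vectors, which is what lets a variable-length-sequence model plug into a fixed-input discriminative classifier.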