Part of Advances in Neural Information Processing Systems 11 (NIPS 1998)
Thore Graepel, Ralf Herbrich, Peter Bollmann-Sdorra, Klaus Obermayer
We investigate the problem of learning a classification task on data represented in terms of their pairwise proximities. This representation does not refer to an explicit feature representation of the data items and is thus more general than the standard approach of using Euclidean feature vectors, from which pairwise proximities can always be calculated. Our first approach is based on a combined linear embedding and classification procedure resulting in an extension of the Optimal Hyperplane algorithm to pseudo-Euclidean data. As an alternative we present another approach based on a linear threshold model in the proximity values themselves, which is optimized using Structural Risk Minimization. We show that prior knowledge about the problem can be incorporated by the choice of distance measures and examine different metrics w.r.t. their generalization. Finally, the algorithms are successfully applied to protein structure data and to data from the cat's cerebral cortex. They show better performance than K-nearest-neighbor classification.
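The second approach, a linear threshold model in the proximity values, can be illustrated with a minimal sketch. This is not the paper's exact algorithm (which uses Structural Risk Minimization); it is a simplified perceptron variant under the assumption that each item i is represented by the i-th row of the proximity matrix P, and a linear classifier sign(P[i] · alpha + b) is learned on those rows. All names (`train_proximity_perceptron`, `alpha`, `b`) are illustrative.

```python
import numpy as np

def train_proximity_perceptron(P, y, epochs=100, lr=0.1):
    """Fit a linear threshold model on proximity rows.

    P: (n, n) symmetric proximity matrix; y: labels in {-1, +1}.
    Each item i is represented by row P[i]; a perceptron learns
    weights alpha and bias b so that sign(P[i] @ alpha + b) = y[i].
    """
    n = P.shape[0]
    alpha = np.zeros(n)
    b = 0.0
    for _ in range(epochs):
        for i in range(n):
            if y[i] * (P[i] @ alpha + b) <= 0:  # misclassified: update
                alpha += lr * y[i] * P[i]
                b += lr * y[i]
    return alpha, b

# Toy example: proximities derived from two well-separated 1-D clusters.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.3, 10), rng.normal(2, 0.3, 10)])
y = np.array([-1] * 10 + [1] * 10)
P = -np.abs(x[:, None] - x[None, :])  # similarity = negative distance
alpha, b = train_proximity_perceptron(P, y)
accuracy = (np.sign(P @ alpha + b) == y).mean()
```

Note that only the proximity matrix P is used for training and prediction; no explicit feature vectors are required, which is the point of the proximity-based representation.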