Combining Classifiers Using Correspondence Analysis

Part of Advances in Neural Information Processing Systems 10 (NIPS 1997)


Authors

Christopher Merz

Abstract

Several effective methods for improving the performance of a single learning algorithm have been developed recently. The general approach is to create a set of learned models by repeatedly applying the algorithm to different versions of the training data, and then combine the learned models' predictions according to a prescribed voting scheme. Little work has been done in combining the predictions of a collection of models generated by many learning algorithms having different representation and/or search strategies. This paper describes a method which uses the strategies of stacking and correspondence analysis to model the relationship between the learning examples and the way in which they are classified by a collection of learned models. A nearest neighbor method is then applied within the resulting representation to classify previously unseen examples. The new algorithm consistently performs as well or better than other combining techniques on a suite of data sets.
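
The sketch below illustrates one plausible reading of the pipeline described in the abstract, not the paper's exact procedure: several base learners are stacked, an indicator matrix of their training-set predictions plus the true class is reduced with correspondence analysis, and unseen examples are classified by nearest neighbor in the reduced space. The datasets, base learners, function names, and the choice of nearest neighbor over training rows are illustrative assumptions.

```python
# Illustrative sketch of stacking + correspondence analysis + nearest neighbor.
# Not the paper's exact algorithm; names and parameters are assumptions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier


def correspondence_analysis(N, n_components=2):
    """Classical correspondence analysis of a nonnegative indicator matrix N."""
    P = N / N.sum()
    r = P.sum(axis=1, keepdims=True)              # row masses
    c = P.sum(axis=0, keepdims=True)              # column masses
    S = (P - r @ c) / np.sqrt(r @ c)              # standardized residuals
    U, d, Vt = np.linalg.svd(S, full_matrices=False)
    row_coords = (U * d) / np.sqrt(r)             # principal row coordinates
    col_std = Vt.T / np.sqrt(c.T)                 # standard column coordinates
    return row_coords[:, :n_components], col_std[:, :n_components]


def one_hot(labels, n_classes):
    out = np.zeros((len(labels), n_classes))
    out[np.arange(len(labels)), labels] = 1.0
    return out


X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
n_classes = len(np.unique(y))

# Base learners with different representations / search strategies.
models = [DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr),
          GaussianNB().fit(X_tr, y_tr)]

# Indicator matrix: one block of columns per model's predictions,
# plus a block encoding the true class of each training example.
blocks = [one_hot(m.predict(X_tr), n_classes) for m in models]
blocks.append(one_hot(y_tr, n_classes))
N = np.hstack(blocks)

row_coords, col_std = correspondence_analysis(N, n_components=2)

# Place each test example in the same space via its prediction profile
# (true-class columns are unknown, so left at zero), using the CA transition
# formula: supplementary row coordinates = row profile @ standard column coords.
test_blocks = [one_hot(m.predict(X_te), n_classes) for m in models]
test_blocks.append(np.zeros((len(X_te), n_classes)))
N_te = np.hstack(test_blocks)
profiles = N_te / N_te.sum(axis=1, keepdims=True)
test_coords = profiles @ col_std

# Nearest neighbor in the derived representation assigns the final label.
knn = KNeighborsClassifier(n_neighbors=1).fit(row_coords, y_tr)
print("combined accuracy:", knn.score(test_coords, y_te))
```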