Boosting Decision Trees

Part of Advances in Neural Information Processing Systems 8 (NIPS 1995)

Authors

Harris Drucker, Corinna Cortes

Abstract

We introduce a constructive, incremental learning system for regression problems that models data by means of locally linear experts. In contrast to other approaches, the experts are trained independently and do not compete for data during learning. Only when a prediction for a query is required do the experts cooperate by blending their individual predictions. Each expert is trained by minimizing a penalized local cross-validation error using second-order methods. In this way, an expert is able to find a local distance metric by adjusting the size and shape of the receptive field in which its predictions are valid, and also to detect relevant input features by adjusting its bias on the importance of individual input dimensions. We derive asymptotic results for our method. In a variety of simulations the properties of the algorithm are demonstrated with respect to interference, learning speed, prediction accuracy, feature detection, and task-oriented incremental learning.
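To make the cooperative-prediction idea concrete, below is a minimal sketch of how independently trained locally linear experts might blend their outputs at query time. It assumes Gaussian receptive fields parameterized by a center and a distance metric, and a simple activation-weighted average for blending; the class and parameter names (LinearExpert, metric D, etc.) are illustrative choices, not taken from the paper.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): each expert holds a
# local linear model and a receptive field describing where it is valid.

class LinearExpert:
    def __init__(self, center, metric, weights, bias):
        self.c = np.asarray(center, dtype=float)   # receptive field center
        self.D = np.asarray(metric, dtype=float)   # distance metric (field size/shape)
        self.w = np.asarray(weights, dtype=float)  # local linear slope
        self.b = float(bias)                       # local linear offset

    def activation(self, x):
        # Gaussian receptive field: how valid this expert's prediction is at x
        d = x - self.c
        return np.exp(-0.5 * d @ self.D @ d)

    def predict(self, x):
        # Locally linear prediction around the expert's center
        return self.w @ (x - self.c) + self.b


def blend(experts, x):
    """Cooperative prediction: activation-weighted average of expert outputs."""
    acts = np.array([e.activation(x) for e in experts])
    preds = np.array([e.predict(x) for e in experts])
    return float(acts @ preds / (acts.sum() + 1e-12))


# Toy usage: two experts jointly approximating y = |x| on a 1-D input
e1 = LinearExpert(center=[-1.0], metric=[[1.0]], weights=[-1.0], bias=1.0)
e2 = LinearExpert(center=[1.0], metric=[[1.0]], weights=[1.0], bias=1.0)
print(blend([e1, e2], np.array([0.5])))
```

In this sketch, each expert's contribution falls off with distance from its center according to its own metric, which mirrors the abstract's point that an expert adjusts the size and shape of the receptive field in which its predictions are valid.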