Bundle Methods for Machine Learning

Part of Advances in Neural Information Processing Systems 20 (NIPS 2007)


Authors

Quoc Le, Alex Smola, S. V. N. Vishwanathan

Abstract

We present a globally convergent method for regularized risk minimization problems. Our method applies to Support Vector estimation, regression, Gaussian Processes, and any other regularized risk minimization setting which leads to a convex optimization problem. SVMPerf can be shown to be a special case of our approach. In addition to the unified framework we present tight convergence bounds, which show that our algorithm converges in O(1/ε) steps to ε precision for general convex problems and in O(log(1/ε)) steps for continuously differentiable problems. We demonstrate in experiments the performance of our approach.
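To make the setting concrete, below is a minimal, hypothetical sketch of a bundle (cutting-plane) method for regularized risk minimization, in the spirit of the abstract but not the paper's exact algorithm. It assumes an L2 regularizer with constant lam and an average hinge loss on synthetic data; at each step it adds a linear lower bound (cut) on the empirical risk at the current iterate, minimizes the regularized cutting-plane model via its dual with SciPy's SLSQP solver, and stops once the gap between the best observed objective and the model minimum falls below a tolerance. All names and the data are illustrative.

```python
# Hedged sketch of a bundle / cutting-plane method for regularized risk
# minimization (not the authors' reference implementation).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic binary classification data (hypothetical example).
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=n))

lam = 0.1  # regularization constant


def emp_risk_and_subgrad(w):
    """Average hinge loss and one of its subgradients at w."""
    margins = 1.0 - y * (X @ w)
    active = margins > 0
    risk = np.mean(np.maximum(margins, 0.0))
    grad = -(X[active].T @ y[active]) / n
    return risk, grad


def solve_model(A, b, lam):
    """Minimize lam/2 ||w||^2 + max_i (a_i . w + b_i) via its dual:
    maximize -1/(2 lam) ||A^T alpha||^2 + b . alpha over the simplex."""
    t = len(b)

    def neg_dual(alpha):
        v = A.T @ alpha
        return 0.5 / lam * (v @ v) - b @ alpha

    cons = {"type": "eq", "fun": lambda a: np.sum(a) - 1.0}
    res = minimize(neg_dual, np.full(t, 1.0 / t), method="SLSQP",
                   bounds=[(0.0, 1.0)] * t, constraints=[cons])
    alpha = res.x
    w = -(A.T @ alpha) / lam
    model_val = lam / 2 * (w @ w) + np.max(A @ w + b)
    return w, model_val


# Bundle iterations: add a cut, re-minimize the model, check the gap.
w = np.zeros(d)
A_rows, b_vals = [], []
best_J = np.inf
for it in range(50):
    risk, a = emp_risk_and_subgrad(w)
    J = lam / 2 * (w @ w) + risk          # true regularized risk at w
    best_J = min(best_J, J)
    A_rows.append(a)
    b_vals.append(risk - a @ w)           # offset so the cut is tight at w
    w, model_val = solve_model(np.array(A_rows), np.array(b_vals), lam)
    gap = best_J - model_val              # optimality gap estimate
    print(f"iter {it:2d}  J = {J:.4f}  gap = {gap:.4e}")
    if gap < 1e-3:
        break
```

The gap monitored here is the difference between the best regularized risk seen so far and the minimum of the piecewise-linear model, which lower-bounds the true optimum; driving it below ε is the kind of stopping criterion the abstract's O(1/ε) and O(log(1/ε)) rates refer to.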