Part of Advances in Neural Information Processing Systems 17 (NIPS 2004)
Sham M. Kakade, Andrew Ng
We present a competitive analysis of Bayesian learning algorithms in the online learning setting and show that many simple Bayesian algorithms (such as Gaussian linear regression and Bayesian logistic regression) perform favorably when compared, in retrospect, to the single best model in the model class. The analysis does not assume that the Bayesian algorithms' modeling assumptions are "correct," and our bounds hold even if the data is adversarially chosen. For Gaussian linear regression (using log loss), our error bounds are comparable to the best bounds in the online learning literature, and we also provide a lower bound showing that Gaussian linear regression is optimal in a certain worst-case sense. We also give bounds for some widely used maximum a posteriori (MAP) estimation algorithms, including regularized logistic regression.
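To make the online setting concrete, the minimal sketch below (not from the paper; all function names and parameters are illustrative) runs Bayesian Gaussian linear regression one example at a time, assuming a N(0, prior_var·I) prior on the weights and a known noise variance sigma2, and compares its cumulative log loss with that of the single best fixed linear predictor chosen in hindsight.

```python
import numpy as np

def bayes_online_logloss(X, y, sigma2=1.0, prior_var=1.0):
    """Cumulative log loss of the Bayesian predictive distribution,
    processing the examples in sequence (online)."""
    n = X.shape[1]
    A = np.eye(n) / prior_var           # posterior precision; starts at the prior
    b = np.zeros(n)                     # running sum of (1/sigma2) * y_t * x_t
    total = 0.0
    for x_t, y_t in zip(X, y):
        mu = np.linalg.solve(A, b)      # current posterior mean
        pred_mean = mu @ x_t
        pred_var = sigma2 + x_t @ np.linalg.solve(A, x_t)  # predictive variance
        # Negative log density of y_t under the Gaussian predictive distribution
        total += 0.5 * (np.log(2 * np.pi * pred_var)
                        + (y_t - pred_mean) ** 2 / pred_var)
        # Posterior update after observing (x_t, y_t)
        A += np.outer(x_t, x_t) / sigma2
        b += y_t * x_t / sigma2
    return total

def best_fixed_logloss(X, y, sigma2=1.0):
    """Log loss, in hindsight, of the single best fixed linear predictor
    (ordinary least squares) under the same Gaussian noise model."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ w
    T = len(y)
    return 0.5 * (T * np.log(2 * np.pi * sigma2) + resid @ resid / sigma2)

rng = np.random.default_rng(0)
T, n = 200, 5
X = rng.normal(size=(T, n))
# The data need not actually follow the model's assumptions
y = X @ rng.normal(size=n) + rng.normal(scale=0.5, size=T)
print("Bayes (online) log loss:", bayes_online_logloss(X, y))
print("Best fixed w log loss:  ", best_fixed_logloss(X, y))
```

The gap between the two totals is the regret of the Bayesian algorithm relative to the best model in hindsight; bounds of the kind given in the paper control this gap, and it grows only logarithmically in T for Gaussian linear regression with log loss, even though nothing in the comparison requires the data to have been generated by the model.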