
No-Regret Algorithms for Unconstrained Online Convex Optimization

Part of: Advances in Neural Information Processing Systems 25 (NIPS 2012)


Authors

Matthew Streeter, H. Brendan McMahan

Abstract

Some of the most compelling applications of online convex optimization, including online prediction and classification, are unconstrained: the natural feasible set is R^n. Existing algorithms fail to achieve sub-linear regret in this setting unless constraints on the comparator point x* are known in advance. We present an algorithm that, without such prior knowledge, offers near-optimal regret bounds with respect to _any_ choice of x*. In particular, regret with respect to x* = 0 is _constant_. We then prove lower bounds showing that our algorithm's guarantees are optimal in this setting up to constant factors.
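
For context (this definition is standard and not part of the original abstract): in online convex optimization the learner plays points x_1, ..., x_T in R^n, convex loss functions f_1, ..., f_T are revealed one at a time, and performance is measured by regret against a fixed comparator x*:

    \mathrm{Regret}_T(x^*) \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} f_t(x^*)

A no-regret algorithm keeps this quantity sub-linear in T. The abstract's claim is that such a guarantee can be obtained simultaneously for every x* in R^n, without a known bound on the norm of x*, and that the guarantee reduces to a constant at x* = 0.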