No-Regret Algorithms for Unconstrained Online Convex Optimization

Part of Advances in Neural Information Processing Systems 25 (NIPS 2012)

Authors

Brendan McMahan, Matthew Streeter

Abstract

Some of the most compelling applications of online convex optimization, including online prediction and classification, are unconstrained: the natural feasible set is R^n. Existing algorithms fail to achieve sub-linear regret in this setting unless constraints on the comparator point x* are known in advance. We present an algorithm that, without such prior knowledge, offers near-optimal regret bounds with respect to any choice of x*. In particular, regret with respect to x* = 0 is constant. We then prove lower bounds showing that our algorithm's guarantees are optimal in this setting up to constant factors.
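To make the setting concrete, below is a minimal sketch of the unconstrained online convex optimization protocol and the regret quantity the abstract refers to. The linearized losses f_t(x) = g_t · x, the `UnprojectedGD` learner, and its fixed step size `eta` are illustrative assumptions for this sketch only; they are not the paper's algorithm, which needs no prior bound on x*.

```python
import numpy as np

def online_protocol(losses, learner, x_star):
    """Run the unconstrained OCO protocol over R^n and report regret
    against a fixed comparator x_star (the abstract's x*)."""
    learner_loss, comparator_loss = 0.0, 0.0
    for grad_fn in losses:                   # one convex loss per round
        x_t = learner.predict()              # play a point in R^n
        g_t = grad_fn(x_t)                   # observe the (sub)gradient
        learner_loss += g_t @ x_t            # linearized loss of the learner
        comparator_loss += g_t @ x_star      # same loss at the comparator
        learner.update(g_t)
    return learner_loss - comparator_loss    # regret with respect to x_star

class UnprojectedGD:
    """Plain gradient descent on R^n with a fixed step size.
    A stand-in learner for illustration only; without knowing a bound
    on the comparator, a fixed eta cannot match the paper's guarantees."""
    def __init__(self, dim, eta=0.1):
        self.x = np.zeros(dim)
        self.eta = eta
    def predict(self):
        return self.x
    def update(self, g):
        self.x -= self.eta * g               # no projection: feasible set is all of R^n

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grads = rng.normal(size=(100, 3))
    losses = [lambda x, g=g: g for g in grads]   # linear losses f_t(x) = g . x
    regret = online_protocol(losses, UnprojectedGD(dim=3), x_star=np.ones(3))
    print(f"regret vs. x* = (1, 1, 1): {regret:.3f}")
```

Measuring regret against an arbitrary x* in R^n, rather than against the best point of a known bounded set, is what makes the unconstrained setting difficult: the sketch's fixed-step learner only performs well if its step size happens to suit the scale of x*, which is exactly the prior knowledge the paper's algorithm avoids.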