#### Authors

Brendan McMahan, Matthew Streeter

#### Abstract

<p>Some of the most compelling applications of online convex optimization, including online prediction and classification, are unconstrained: the natural feasible set is R^n. Existing algorithms fail to achieve sub-linear regret in this setting unless constraints on the comparator point x* are known in advance. We present an algorithm that, without such prior knowledge, offers near-optimal regret bounds with respect to <em>any</em> choice of x*. In particular, regret with respect to x* = 0 is <em>constant</em>. We then prove lower bounds showing that our algorithm's guarantees are optimal in this setting up to constant factors.</p>
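To make the problem concrete, here is a minimal sketch (not the paper's algorithm) of why standard online gradient descent is sensitive to the comparator in the unconstrained setting: its regret against a comparator x* grows with the distance ||x*||, which is why existing methods need a bound on x* in advance. The learning rate `eta` and the linear losses below are illustrative choices, not from the paper.

```python
import numpy as np

def ogd(grads, eta):
    """Online gradient descent on R^n with a fixed step size.

    Unconstrained setting: there is no projection onto a feasible set.
    Returns the sequence of points played before each gradient arrives.
    """
    x = np.zeros_like(grads[0], dtype=float)
    points = []
    for g in grads:
        points.append(x.copy())
        x = x - eta * g
    return points

def regret(grads, points, comparator):
    """Regret for linear losses f_t(x) = <g_t, x> against a fixed comparator."""
    loss = sum(float(g @ x) for g, x in zip(grads, points))
    best = sum(float(g @ comparator) for g in grads)
    return loss - best

# Adversary always reports gradient -1, so larger x is always better.
T = 100
grads = [np.array([-1.0]) for _ in range(T)]
points = ogd(grads, eta=0.125)

r_zero = regret(grads, points, np.array([0.0]))   # comparator x* = 0
r_far = regret(grads, points, np.array([10.0]))   # distant comparator
print(r_zero, r_far)  # regret vs. a distant x* is much larger
```

Against x* = 0 the regret here is non-positive, but against a far-away comparator it grows roughly like ||x*|| * T for any fixed step size; tuning `eta` to fix this requires knowing ||x*||, which is exactly the prior knowledge the paper's algorithm dispenses with.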