
Mind the Duality Gap: Logarithmic regret algorithms for online optimization

Part of: Advances in Neural Information Processing Systems 21 (NIPS 2008)




We describe a primal-dual framework for the design and analysis of online strongly convex optimization algorithms. Our framework yields the tightest known logarithmic regret bounds for Follow-The-Leader and for the gradient descent algorithm proposed by Hazan et al. (2006). We then show that one can interpolate between these two extreme cases. In particular, we derive a new algorithm that shares the computational simplicity of gradient descent but achieves lower regret in many practical situations. Finally, we extend our framework to generalized strongly convex functions.
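To make the setting concrete, the following is a minimal sketch (not the paper's algorithm) of online gradient descent for λ-strongly convex losses with step size 1/(λt), the classical scheme of Hazan et al. (2006) that achieves O(log T) regret. The loss sequence, λ value, and quadratic example are illustrative assumptions, not from the paper.

```python
import numpy as np

def online_gradient_descent(grads, lam, w0):
    """Online gradient descent for lam-strongly convex losses.

    At round t the learner plays w_t, suffers f_t(w_t), and updates
    w_{t+1} = w_t - (1 / (lam * t)) * grad f_t(w_t).
    This step-size schedule is known to give O(log T) regret.
    """
    w = np.asarray(w0, dtype=float)
    iterates = []
    for t, grad in enumerate(grads, start=1):
        iterates.append(w.copy())          # w_t, the point played at round t
        w = w - grad(w) / (lam * t)        # gradient step with eta_t = 1/(lam*t)
    return iterates

# Illustrative losses: f_t(w) = (lam/2) * (w - z_t)^2, which is lam-strongly convex.
lam = 1.0
rng = np.random.default_rng(0)
zs = rng.normal(size=100)
grads = [lambda w, z=z: lam * (w - z) for z in zs]
iterates = online_gradient_descent(grads, lam, w0=0.0)

# Regret against the best fixed point in hindsight (here, the mean of the z_t).
w_star = zs.mean()
loss = lambda w, z: 0.5 * lam * (w - z) ** 2
regret = sum(loss(w, z) - loss(w_star, z) for w, z in zip(iterates, zs))
```

For these quadratic losses the update reduces to a running average of the observed z_t, and the accumulated regret grows only logarithmically in the number of rounds.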