Connections Between Mirror Descent, Thompson Sampling and the Information Ratio
Conference Event Type: Poster
The information-theoretic analysis by Russo and Van Roy, in combination with minimax duality, has proved a powerful tool for the analysis of online learning algorithms in full and partial information settings. In most applications there is a tantalising similarity to the classical analysis based on mirror descent. We make this connection formal, showing that the information-theoretic bounds in most applications can be derived using existing techniques from online convex optimisation. In addition, we improve the best known regret guarantees for $k$-armed adversarial bandits, online linear optimisation on $\ell_p$-balls and bandits with graph feedback.
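To make the mirror-descent side of this connection concrete, the following is a minimal sketch (not from the paper) of online mirror descent with the negative-entropy regulariser on the probability simplex, which yields the exponential-weights update; combined with importance-weighted loss estimates this is the classical Exp3 algorithm for $k$-armed adversarial bandits. The function name and the choice of learning rate are illustrative assumptions.

```python
import numpy as np

def exp3(loss_matrix, eta, seed=0):
    """Exponential-weights mirror descent (Exp3) for k-armed adversarial bandits.

    Online mirror descent with the negative-entropy regulariser on the simplex
    reduces to exponential weights; losses of unplayed arms are estimated by
    importance weighting, keeping the estimates unbiased.
    """
    rng = np.random.default_rng(seed)
    T, k = loss_matrix.shape
    cum_est = np.zeros(k)  # cumulative importance-weighted loss estimates
    total_loss = 0.0
    for t in range(T):
        # Mirror-descent step with negative entropy: exponential weights.
        # Subtracting the minimum is a standard trick for numerical stability.
        w = np.exp(-eta * (cum_est - cum_est.min()))
        p = w / w.sum()
        arm = rng.choice(k, p=p)          # sample an arm from the current distribution
        loss = loss_matrix[t, arm]
        total_loss += loss
        cum_est[arm] += loss / p[arm]     # unbiased estimate of the full loss vector
    return total_loss

# Illustrative run: arm 0 always incurs loss 0, the others loss 1,
# so the learner should quickly concentrate its play on arm 0.
T, k = 2000, 3
losses = np.zeros((T, k))
losses[:, 1:] = 1.0
eta = np.sqrt(2.0 * np.log(k) / (T * k))  # standard tuning of order sqrt(log k / (T k))
regret = exp3(losses, eta)  # best arm has cumulative loss 0, so total loss = regret
```

With this tuning the expected regret is of order $\sqrt{T k \log k}$, which is the classical Exp3 guarantee; the paper's contribution is, in part, to recover and sharpen such bounds through the information-theoretic lens.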