NIPS Proceedings

Elad Hazan

18 Papers

  • Linear Convergence of a Frank-Wolfe Type Algorithm over Trace-Norm Balls (2017)
  • Online Learning of Linear Dynamical Systems (2017)
  • A Non-generative Framework and Convex Relaxations for Unsupervised Learning (2016)
  • Optimal Black-Box Reductions Between Optimization Objectives (2016)
  • The Limits of Learning with Missing Data (2016)
  • Beyond Convexity: Stochastic Quasi-Convex Optimization (2015)
  • Online Gradient Boosting (2015)
  • Online Learning for Adversaries with Memory: Price of Past Mistakes (2015)
  • Bandit Convex Optimization: Towards Tight Bounds (2014)
  • The Blinded Bandit: Learning with Adaptive Feedback (2014)
  • A Polylog Pivot Steps Simplex Algorithm for Classification (2012)
  • Approximating Semidefinite Programs in Sublinear Time (2011)
  • Beating SGD: Learning SVMs in Sublinear Time (2011)
  • Newtron: an Efficient Bandit algorithm for Online Multiclass Prediction (2011)
  • Beyond Convexity: Online Submodular Minimization (2009)
  • On Stochastic and Worst-case Models for Investing (2009)
  • Adaptive Online Gradient Descent (2007)
  • Computational Equivalence of Fixed Points and No Regret Algorithms, and Convergence to Equilibria (2007)