NIPS Proceedings

Zeyuan Allen-Zhu

10 Papers

  • Byzantine Stochastic Gradient Descent (2018)
  • How To Make the Gradients Small Stochastically: Even Faster Convex and Nonconvex SGD (2018)
  • Is Q-Learning Provably Efficient? (2018)
  • Natasha 2: Faster Non-Convex Optimization Than SGD (2018)
  • NEON2: Finding Local Minima via First-Order Oracles (2018)
  • The Lingering of Gradients: How to Reuse Gradients Over Time (2018)
  • Linear Convergence of a Frank-Wolfe Type Algorithm over Trace-Norm Balls (2017)
  • Exploiting the Structure: Stochastic Gradient Methods Using Raw Clusters (2016)
  • LazySVD: Even Faster SVD Decomposition Yet Without Agonizing Pain (2016)
  • Optimal Black-Box Reductions Between Optimization Objectives (2016)