NIPS Proceedings

Yuanzhi Li

15 Papers

  • Can SGD Learn Recurrent Neural Networks with Provable Generalization? (2019)
  • Complexity of Highly Parallel Non-Smooth Convex Optimization (2019)
  • Learning and Generalization in Overparameterized Neural Networks, Going Beyond Two Layers (2019)
  • On the Convergence Rate of Training Recurrent Neural Networks (2019)
  • Towards Explaining the Regularization Effect of Initial Large Learning Rate in Training Neural Networks (2019)
  • What Can ResNet Learn Efficiently, Going Beyond Kernels? (2019)
  • Learning Overparameterized Neural Networks via Stochastic Gradient Descent on Structured Data (2018)
  • NEON2: Finding Local Minima via First-Order Oracles (2018)
  • Online Improper Learning with an Approximation Oracle (2018)
  • Convergence Analysis of Two-layer Neural Networks with ReLU Activation (2017)
  • Linear Convergence of a Frank-Wolfe Type Algorithm over Trace-Norm Balls (2017)
  • Algorithms and matching lower bounds for approximately-convex optimization (2016)
  • Approximate maximum entropy principles via Goemans-Williamson with applications to provable variational methods (2016)
  • LazySVD: Even Faster SVD Decomposition Yet Without Agonizing Pain (2016)
  • Recovery Guarantee of Non-negative Matrix Factorization via Alternating Updates (2016)