NIPS Proceedings

Jason D. Lee

14 Papers

  • Adding One Neuron Can Eliminate All Bad Local Minima (2018)
  • Algorithmic Regularization in Learning Deep Homogeneous Models: Layers are Automatically Balanced (2018)
  • Implicit Bias of Gradient Descent on Linear Convolutional Networks (2018)
  • On the Convergence and Robustness of Training GANs with Regularized Optimal Transport (2018)
  • Provably Correct Automatic Sub-Differentiation for Qualified Programs (2018)
  • Gradient Descent Can Take Exponential Time to Escape Saddle Points (2017)
  • Matrix Completion has No Spurious Local Minimum (2016)
  • Evaluating the statistical significance of biclusters (2015)
  • Exact Post Model Selection Inference for Marginal Screening (2014)
  • Scalable Methods for Nonnegative Matrix Factorizations of Near-separable Tall-and-skinny Matrices (2014)
  • On model selection consistency of penalized M-estimators: a geometric theory (2013)
  • Using multiple samples to learn mixture models (2013)
  • Proximal Newton-type methods for convex optimization (2012)
  • Practical Large-Scale Optimization for Max-norm Regularization (2010)