Part of Advances in Neural Information Processing Systems 32 (NeurIPS 2019)
Yue Sun, Nicolas Flammarion, Maryam Fazel
We consider minimizing a nonconvex, smooth function f on a Riemannian manifold M. We show that a perturbed version of Riemannian gradient descent converges to a second-order stationary point of this problem (and hence is able to escape saddle points on the manifold). While the unconstrained case is well studied, our result is the first to prove such a rate for nonconvex, manifold-constrained problems. The rate of convergence depends as 1/ϵ² on the accuracy ϵ, matching a rate previously known only for unconstrained smooth minimization. The convergence rate also depends polynomially on parameters describing the curvature of the manifold and the smoothness of the function.
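To make the algorithmic idea concrete, here is a minimal sketch of perturbed gradient descent on the unit sphere, chosen only as a convenient example manifold. The objective, step size, perturbation radius, and gradient threshold below are illustrative placeholders, not the paper's exact algorithm or constants; the point is the pattern of taking Riemannian gradient steps through a retraction and injecting a random tangent-space perturbation whenever the gradient is small, so the iterates can move away from saddle points.

import numpy as np

def project_to_tangent(x, v):
    # Orthogonal projection of an ambient vector v onto the tangent space
    # of the unit sphere at x.
    return v - np.dot(x, v) * x

def retract(x, v):
    # Retraction for the sphere: step in the ambient space, then renormalize.
    y = x + v
    return y / np.linalg.norm(y)

def perturbed_riemannian_gd(grad_f, x0, eta=0.01, grad_tol=1e-4,
                            perturb_radius=1e-2, n_iters=10_000, seed=0):
    # Riemannian gradient steps; when the Riemannian gradient is small,
    # add a random tangent perturbation to help escape saddle points.
    # (Illustrative schedule only, not the paper's analyzed parameters.)
    rng = np.random.default_rng(seed)
    x = x0 / np.linalg.norm(x0)
    for _ in range(n_iters):
        g = project_to_tangent(x, grad_f(x))        # Riemannian gradient
        if np.linalg.norm(g) <= grad_tol:
            xi = project_to_tangent(x, rng.normal(size=x.shape))
            xi *= perturb_radius / (np.linalg.norm(xi) + 1e-12)
            x = retract(x, xi)                      # random perturbation
        else:
            x = retract(x, -eta * g)                # gradient step
    return x

# Toy example: minimize f(x) = x' A x on the sphere. Eigenvectors of A are
# the stationary points; only the eigenvector of the smallest eigenvalue is a
# minimizer, so starting at the middle eigenvector requires escaping a saddle.
A = np.diag([3.0, 1.0, -2.0])
x_hat = perturbed_riemannian_gd(lambda x: 2.0 * A @ x,
                                x0=np.array([0.0, 1.0, 0.0]))

In this sketch the retraction plays the role that a plain gradient step plays in the unconstrained setting, and the perturbation is drawn in the tangent space so the iterate stays on the manifold after retraction.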