
Finite Sample Convergence Rates of Zero-Order Stochastic Optimization Methods

Part of: Advances in Neural Information Processing Systems 25 (NIPS 2012)


Authors

John C. Duchi, Michael I. Jordan, Martin J. Wainwright, Andre Wibisono

Abstract

We consider derivative-free algorithms for stochastic optimization problems that use only noisy function values rather than gradients, analyzing their finite-sample convergence rates. We show that if pairs of function values are available, algorithms that use gradient estimates based on random perturbations suffer a factor of at most $\sqrt{\dim}$ in convergence rate over traditional stochastic gradient methods, where $\dim$ is the dimension of the problem. We complement our algorithmic development with information-theoretic lower bounds on the minimax convergence rate of such problems, which show that our bounds are sharp with respect to all problem-dependent quantities: they cannot be improved by more than constant factors.
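To make the class of algorithms in the abstract concrete, the sketch below illustrates a generic two-point, perturbation-based gradient estimator plugged into a stochastic-gradient-style loop. It is not the paper's exact algorithm or tuning: the function names (`two_point_gradient_estimate`, `zero_order_sgd`), the Gaussian perturbation, the $1/\sqrt{t}$ step sizes, the smoothing parameter, and the noisy quadratic test objective are all illustrative assumptions.

```python
import numpy as np

def two_point_gradient_estimate(f, x, u, delta):
    """Estimate a gradient from a pair of (noisy) function values
    along a random perturbation direction u with smoothing parameter delta."""
    return ((f(x + delta * u) - f(x)) / delta) * u

def zero_order_sgd(f, x0, num_iters=2000, delta=1e-2, step_size=0.1, seed=0):
    """Minimal zeroth-order stochastic gradient sketch: at each step, draw a
    random direction, form a two-point gradient estimate from function values,
    and take a gradient-style step (illustrative choices throughout)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    dim = x.size
    for t in range(1, num_iters + 1):
        u = rng.standard_normal(dim)           # random perturbation direction
        g = two_point_gradient_estimate(f, x, u, delta)
        x -= (step_size / np.sqrt(t)) * g      # decaying step sizes, a common choice
    return x

if __name__ == "__main__":
    # Noisy quadratic objective as a stand-in for stochastic function evaluations.
    rng = np.random.default_rng(1)
    target = np.ones(10)
    noisy_f = lambda x: np.sum((x - target) ** 2) + 0.01 * rng.standard_normal()
    x_hat = zero_order_sgd(noisy_f, x0=np.zeros(10))
    print(np.round(x_hat, 2))
```

Under this kind of scheme, the estimator's variance grows with the dimension, which is the intuition behind the $\sqrt{\dim}$ factor in the convergence rates discussed above.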