NeurIPS 2019
Sun Dec 8th – Sat Dec 14th, 2019, Vancouver Convention Center
Paper ID: 2038
Title:Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates

This paper brings a classic idea into the present and makes progress on a vexing problem with SGD: setting the step size. The authors provide both theoretical and empirical evidence that their method is useful. The assumptions may be somewhat limiting; one version requires strong convexity, and when that is relaxed, other assumptions must be made. But this work points to a path that may be useful in the long run. An important way of contributing to ML is bridging fields; that could mean bringing in ideas that are state-of-the-art in other fields, or it could mean revisiting classic ideas in new ways. Indeed, SGD itself is a revisitation of a classic idea that was impractical in its own time but found wide application when data sets grew large. This paper is a good contribution because it bridges fields and provides rigorous evidence to support its improvements. I'm strongly in favor of acceptance.
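To make the core idea concrete for other reviewers: the paper adaptively sets the SGD step size with a stochastic Armijo-style backtracking line search, checked on the sampled loss. The sketch below is my own minimal illustration of that idea on a noiseless (interpolating) least-squares problem; the function name and hyperparameter choices (`c`, `beta`, `eta_max`) are illustrative, not the authors' exact procedure.

```python
import numpy as np

def sgd_armijo(X, y, w, c=0.5, beta=0.7, eta_max=1.0, epochs=50, seed=0):
    """SGD where each step size is found by backtracking until the
    sampled (per-example) loss satisfies an Armijo sufficient-decrease
    condition: f_i(w - eta*g) <= f_i(w) - c*eta*||g||^2."""
    rng = np.random.default_rng(seed)
    n = len(y)
    for _ in range(epochs):
        for i in rng.permutation(n):
            xi, yi = X[i], y[i]
            # per-example squared loss and its gradient
            r = xi @ w - yi
            loss = 0.5 * r ** 2
            grad = r * xi
            g2 = grad @ grad
            eta = eta_max
            # shrink eta geometrically until sufficient decrease holds
            while (0.5 * ((xi @ (w - eta * grad)) - yi) ** 2
                   > loss - c * eta * g2) and eta > 1e-10:
                eta *= beta
            w = w - eta * grad
    return w

# Toy interpolating problem: y = X @ w_true exactly, so the model can
# fit every example and the line search can accept large steps.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
w = sgd_armijo(X, y, np.zeros(3))
```

On this interpolating problem the backtracking condition accepts near-optimal per-example steps, so `w` converges to `w_true` without any hand-tuned learning rate, which is the behavior the paper's theory predicts under interpolation.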