NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 1029
Title: On the Ineffectiveness of Variance Reduced Optimization for Deep Learning


The reviewers thought this paper provided an interesting analysis of the ineffectiveness of variance reduction methods for deep networks. They did, however, raise several concerns about the lack of detail in the experimental setup, especially since such details can affect the paper's conclusions. It is true that, for papers of this kind, there are always more experiments the authors could run to broaden or strengthen their claims. That being said, I decided to accept the paper for the following reasons:

- Requiring a large number of experiments for a paper to be accepted to NeurIPS would preclude those with limited compute resources from publishing.
- Papers of this type, such as "The Marginal Value of Adaptive Gradient Methods in Machine Learning" by Wilson et al., inevitably lead to other authors trying to reproduce the results or trying different setups. Accepting this paper will therefore provide a basis for experiments by other researchers.
- The authors committed in their rebuttal to including additional results, running more experiments, and releasing their code on GitHub.

It is possible that future work, either by the authors or by others, will reach a different conclusion by running some of the experiments the reviewers asked for. However, even if that were to happen, I believe accepting this paper is a good way to accelerate such experiments.

To the authors: this paper is accepted with the understanding that you will spend a significant amount of time ensuring that all experiments are conducted to the highest standard and that all details are present in the paper.