NeurIPS 2020

Variance Reduction via Accelerated Dual Averaging for Finite-Sum Optimization


Meta Review

The paper presents a novel way to combine SVRG and acceleration for finite-sum convex optimization: an algorithm based on dual averaging, together with a unified analysis that achieves a near-optimal rate for non-strongly convex, smooth objectives and also improves the rate for strongly convex objectives. Previous algorithms for this problem are considerably harder to analyze and implement, suffer from worse convergence bounds, and are applicable or optimal only for either non-strongly convex or strongly convex objectives, not both. This makes the results of the paper significant. At the same time, the authors are encouraged to extend their experiments and revise their interpretation; in particular, they should temper the claim that the new algorithm outperforms the baselines, since, as Reviewer 3 points out, this does not hold in all of the experiments.