NeurIPS 2020

Decentralized Accelerated Proximal Gradient Descent


Meta Review

The paper proposes an accelerated gradient method for decentralized optimization of composite objectives, achieved by mimicking centralized accelerated proximal gradient descent. Slight concerns remained about the level of novelty over the Mudag algorithm, which the discussion should address more precisely, as well as the theoretical requirement of K > 1 communication rounds after every step and the not yet fully explained dependence of K on the graph parameters. We expect the authors to incorporate the feedback and improvement suggestions from the four reviews in the camera-ready version.
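For context, the centralized accelerated proximal gradient scheme the paper mimics can be sketched as follows. This is a minimal FISTA-style sketch applied to an illustrative lasso problem; the function names, step-size choice, and example data are assumptions for illustration, not taken from the paper under review.

```python
import numpy as np

def accelerated_proximal_gradient(grad_f, prox_g, x0, step, iters=500):
    """FISTA-style accelerated proximal gradient for min_x f(x) + g(x).

    grad_f: gradient of the smooth part f.
    prox_g: proximal operator of g, called as prox_g(v, step).
    """
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        # Proximal gradient step at the extrapolated point y.
        x_next = prox_g(y - step * grad_f(y), step)
        # Nesterov momentum update of the extrapolation weight.
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t**2)) / 2.0
        y = x_next + ((t - 1.0) / t_next) * (x_next - x)
        x, t = x_next, t_next
    return x

# Illustrative composite problem: f(x) = 0.5||Ax - b||^2, g(x) = lam * ||x||_1.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true
lam = 0.1

grad_f = lambda x: A.T @ (A @ x - b)
# Prox of the l1 norm is soft-thresholding.
prox_g = lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s * lam, 0.0)

L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of grad_f
x_hat = accelerated_proximal_gradient(grad_f, prox_g, np.zeros(10), 1.0 / L)
```

The decentralized method discussed in the review additionally interleaves K > 1 rounds of gossip communication after each such step to approximate the centralized iterate across agents.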