NeurIPS 2020

Approximate Heavily-Constrained Learning with Lagrange Multiplier Models

Meta Review

This paper proposes a new way of handling a large number of constraints in constrained optimization problems by compressing the large set of Lagrange multipliers into a lower-dimensional vector. This type of optimization is motivated by learning under fairness constraints, where a model may need to satisfy various conditions for many subgroups of the population. The authors prove optimality and generalization guarantees under certain assumptions and establish the practicality of their algorithm empirically. The submission received three knowledgeable and enthusiastic reviews recommending it as a clear acceptance to NeurIPS. Several reviewers, however, noted that some parts (abstract, introduction, discussion of related work) are framed in a misleading way and gave recommendations for a more suitable presentation. The authors are asked to take these into account when preparing the camera-ready version of their paper.
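For readers unfamiliar with the idea, here is a minimal sketch of the multiplier-model approach on a hypothetical toy problem; the problem, the two-parameter multiplier model, and all names below are illustrative assumptions, not the paper's actual algorithm or parameterization. We minimize theta^2 subject to theta >= b_i for many thresholds b_i, and instead of storing one Lagrange multiplier per constraint, a small model maps each constraint's feature b_i to its multiplier.

```python
import numpy as np

# Toy heavily-constrained problem (illustrative only, not the paper's setup):
#   minimize theta^2  subject to  theta >= b_i  for m thresholds b_i.
# Instead of m free Lagrange multipliers, a 2-parameter "multiplier model"
#   lam_i = softplus(w0 + w1 * b_i)
# produces each constraint's multiplier, compressing m multipliers into 2 parameters.
m = 100
b = np.linspace(-1.0, 1.0, m)   # constraint thresholds; the optimum is theta = 1

theta = 0.0
w0, w1 = -3.0, 0.0              # start with small multipliers

def softplus(x):
    return np.log1p(np.exp(x))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.005
for _ in range(20000):
    lam = softplus(w0 + w1 * b)   # nonnegative multipliers from the model
    viol = b - theta              # positive entries are violated constraints
    # Descent on theta for L = theta^2 + sum_i lam_i * (b_i - theta)
    theta -= lr * (2.0 * theta - lam.sum())
    # Ascent on the multiplier-model parameters (chain rule through softplus)
    sig = sigmoid(w0 + w1 * b)
    w0 += lr * np.sum(sig * viol)
    w1 += lr * np.sum(sig * b * viol)

max_violation = float(np.max(b - theta))
print(theta, max_violation)
```

Gradient descent-ascent drives theta toward the binding constraint while only two multiplier parameters are maintained in place of a hundred; the paper's contribution, roughly, is doing this at scale with a learned multiplier model and proving optimality and generalization guarantees for it.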