NeurIPS 2020
### Understanding spiking networks through convex optimization

### Meta Review

The reviewers expressed mixed opinions about this work: overall, the idea of interpreting LIF networks as solving quadratic programs (i.e., quadratic objectives with linear constraints) is quite intriguing, but some aspects of the story could be improved. For example, as R5 noted, the synaptic learning rules currently cover only the feedforward weights, not the recurrent weights.
Moreover, I would add that the recurrent weights are subject to relatively strong low-rank assumptions: specifically, GD has rank M, the dimensionality of the variables being optimized, rather than N, the number of neurons/constraints. This property further implies that the diagonal of the recurrent weight matrix, which determines the reset voltages, is also highly constrained. I think this assumption and its implications warrant further discussion. Finally, you say many times that G_i \propto D_i when really you should say G_i \propto (D^T)_i, since one is M \times N and the other is N \times M.
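To make the low-rank point concrete, a quick numerical sketch (shapes and variable names here are illustrative, following the notation in this review rather than any code from the paper): with a decoder D of shape M x N and an encoder G of shape N x M, the recurrent matrix GD is N x N but can have rank at most M, and its diagonal is fully determined by G and D.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 3, 20                        # hypothetical sizes with M << N
D = rng.standard_normal((M, N))     # decoding weights, M x N
G = D.T                             # encoder proportional to D^T, N x M
W = G @ D                           # recurrent weights, N x N

# W is N x N but its rank is bounded by M, not N.
print(np.linalg.matrix_rank(W))

# The diagonal (which sets the reset voltages) is tied to the
# squared column norms of D, so it cannot be chosen freely.
print(np.allclose(np.diag(W), np.sum(D**2, axis=0)))
```

The second check illustrates why the reset voltages are "highly constrained": each self-connection W_ii equals the squared norm of neuron i's decoding vector.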
Overall, I found the connection between spiking networks and quadratic programming quite interesting, and I encourage the authors to address the reviewers' concerns as much as possible for the camera-ready version.