NeurIPS 2020

Learning to Approximate a Bregman Divergence

Meta Review

After substantial discussion and an increase in the overall scores, the reviewers converged to an overall positive assessment of the paper. On the positive side, the reviewers recognise that the paper is of interest to the community (R2) and that the algorithm is backed by both experiments and theory (R2), and the rebuttal did a reasonable job of resolving notational ambiguities in the generalization bounds (R4). On the negative side, however, the reviewers still point to a gap between theory and experiments (R2), a possible lack of common ground with classical metric learning papers (R2), and non-intuitive dependences on some parameters (R3). In the discussions, R1 agreed with R2 that there is somewhat of a gap between theory and experiments. The authors did a substantial job of clarifying notation in their rebuttal, and I personally thank them for that; it is strongly recommended that they carefully polish the camera-ready version of the paper, not only with respect to that notation, but also to make clear the gap that remains to be narrowed, perhaps in a future work section.