This work provides general results in distributionally robust optimization (DRO) for arbitrary integral probability metrics, along with some discussion of the consequences of these results for f-GANs. The former contribution appears to be a solid improvement over the literature; the latter is more tentative, but may both be relevant to some applications and offer a perspective on the importance of regularization in GAN settings.

The paper as submitted had significant clarity issues regarding the consequences of the work (particularly in the GAN section, but in the DRO section as well). The author response did a lot to clarify this, in particular by bringing up the connection to adversarial robustness. (Although the framework is general, it does sometimes reduce to not-too-surprising special cases; e.g., in the Wasserstein case, it reduces to simply bounding standard adversarial robustness via the Lipschitz constant; see the sketch at the end of this review.) I think, then, that the paper does meet the bar for inclusion at this NeurIPS. I strongly recommend, however, that the final version incorporate the changes corresponding to the discussion in the rebuttal, which should make the paper much more accessible and hence useful to the community.

[In response to your email, which was sent when I no longer had the ability to email you back: indeed, the reviewers had not yet updated their reviews at the time the reviews were released. I did, though, carefully read your (thorough) author response and take it into account in decision-making.]
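As an aside, here is a minimal sketch of the Wasserstein reduction mentioned above, under the assumption that the loss $\ell$ is $L$-Lipschitz in its argument; the notation is mine and may differ from the paper's exact statement. By Kantorovich--Rubinstein duality, for any distribution $Q$ within Wasserstein-1 radius $\varepsilon$ of the data distribution $P$,

$$\sup_{Q \,:\, W_1(P,Q) \le \varepsilon} \mathbb{E}_{Z \sim Q}[\ell(Z)] \;\le\; \mathbb{E}_{Z \sim P}[\ell(Z)] + \varepsilon L,$$

so the distributionally robust risk is controlled by the standard risk plus a Lipschitz-constant penalty, which is the familiar adversarial-robustness-style bound.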