NeurIPS 2020

Robust Pre-Training by Adversarial Contrastive Learning


Meta Review

This paper focuses on adversarial training. The proposal is to incorporate adversarial training into the pre-training step, making the pre-training procedure robustness-aware. This can be seen as an extension of SimCLR with the incorporation of adversarial training. The philosophy behind it is quite interesting to me, namely, introducing adversarial robustness into self-supervised learning and formulating it as a contrastive task. This philosophy leads to a novel algorithm design I have not seen before, i.e., Adversarial-to-Adversarial (A2A), Adversarial-to-Standard (A2S), and Dual Stream (DS). The clarity and novelty are clearly above the bar for NeurIPS. While the reviewers had some concerns about the significance, the authors did a particularly good job in their rebuttal. Thus, most of us agreed to accept this paper for publication. Please carefully address R3's comments in the final version, namely, revising the imprecise presentation in the paper.