NeurIPS 2020
### Graph Contrastive Learning with Augmentations

### Meta Review

The authors propose a contrastive learning framework for graph embedding. It first generates graph samples by applying several graph augmentation strategies (node dropping, edge perturbation, attribute masking, and subgraph sampling) to the original graph, and then maximizes the agreement between the graph embeddings of the same graph under different augmentations. In other words, it aims to learn perturbation-invariant embeddings of graphs.
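To make the reviewed framework concrete, a minimal sketch of two of the augmentations and an NT-Xent-style agreement loss is given below. This is an illustrative reconstruction, not the paper's actual implementation; the function names, the edge-list graph representation, and the hyperparameter values are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility (illustrative choice)

def node_dropping(edges, num_nodes, drop_ratio=0.2):
    """Randomly drop a fraction of nodes and all edges incident to them."""
    dropped = set(rng.choice(num_nodes, int(num_nodes * drop_ratio), replace=False))
    return [(u, v) for (u, v) in edges if u not in dropped and v not in dropped]

def edge_perturbation(edges, num_nodes, ratio=0.2):
    """Randomly remove a fraction of edges and add the same number of random edges."""
    k = int(len(edges) * ratio)
    drop_idx = set(rng.choice(len(edges), k, replace=False))
    kept = [e for i, e in enumerate(edges) if i not in drop_idx]
    added = [tuple(rng.choice(num_nodes, 2, replace=False)) for _ in range(k)]
    return kept + added

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent-style loss between two batches of graph embeddings (one row per graph).

    The two views of the same graph (matching rows) form the positive pair;
    all other graphs in the batch act as negatives.
    """
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau  # temperature-scaled cosine similarities
    # -log softmax of the diagonal (positive-pair) entries
    return -np.mean(np.diag(sim) - np.log(np.exp(sim).sum(axis=1)))
```

In the full pipeline the two augmented views would each be passed through a shared GNN encoder to produce `z1` and `z2`; the loss above then pulls the two views of each graph together while pushing apart views of different graphs.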
Pros:
1. Although contrastive learning via data augmentation has been studied for other data types (e.g., images), adapting this idea to graph-structured data is non-trivial and is hence a novel contribution of this paper.
2. The empirical evaluation is extensive, and the analysis is insightful.
3. The paper is well written and easy to follow.
Cons:
1. A comparison with node- or edge-level embedding methods would make the experiments more complete.
2. More in-depth (theoretical) analysis/comparison of the different augmentation strategies and contrastive losses would be a plus.
3. Two citations on contrastive learning framework are missing [1][2].
[1] Kaiming He, et al., Momentum Contrast for Unsupervised Visual Representation Learning, CVPR 2020.
[2] Herzig, et al., Learning Canonical Representations for Scene Graph to Image Generation, ECCV 2020.