Summary and Contributions: This paper proposes a robust aggregation function, Soft Medoid, which is a differentiable generalization of the Medoid. The Soft Medoid function can be used to markedly improve the adversarial robustness of graph neural networks. I think this is an interesting work.
Strengths: Based on the observation that the statistics commonly used in the aggregation functions of graph neural networks (e.g., average, sum, max) are not robust, the authors propose a novel Soft Medoid aggregation function, which is both robust and differentiable. The Soft Medoid aggregation can be applied to improve the adversarial robustness of GNNs. The authors performed a decent amount of experiments to verify the advantage of the proposed aggregation function. The paper is well written and is a good fit for the NeurIPS community.
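To illustrate the non-robustness this review refers to, here is a minimal toy sketch (my own example, not from the paper) contrasting the mean, whose output a single outlier can drag arbitrarily far, with the medoid, which stays inside the clean cluster:

```python
import numpy as np

def mean_agg(X):
    """Non-robust aggregation: a single outlier can dominate the result."""
    return X.mean(axis=0)

def medoid_agg(X):
    """Robust aggregation: the input point minimizing the summed distance
    to all other points; up to half the points can be corrupted."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return X[dist.sum(axis=1).argmin()]

# Five "clean" neighbor embeddings near (1, 1) plus one adversarial outlier.
clean = np.array([[0.9, 1.1], [1.0, 1.0], [1.1, 0.9], [1.0, 1.1], [0.9, 1.0]])
X = np.vstack([clean, [[100.0, -100.0]]])

print(mean_agg(X))    # dragged far away from the clean cluster
print(medoid_agg(X))  # still a clean point near (1, 1)
```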
Weaknesses: How is the robustness certification performed? More details are needed here. Line 72: "ill-suited suited" -> "ill-suited".
Correctness: The proposed method seems correct, and so does the empirical study.
Clarity: The paper is well written.
Relation to Prior Work: This paper did a great job in the literature review.
Additional Feedback: Please address the Weaknesses section in the revision. --- Post-rebuttal: After carefully reading the paper, I fear the theoretical analysis and the algorithms do not really match. This should be discussed.
Summary and Contributions: The paper proposes a robust aggregation function, Soft Medoid, to replace the sum/mean operation used in conventional GCNs. This operation is fully differentiable and yields better robustness compared with other defense methods.
Strengths: The method is novel and easy to follow. The theoretical analysis is comprehensive and convincing.
Weaknesses: I have some concerns about this paper:
1. It seems that the final Soft Medoid is just a softmax function with a controllable temperature T in Eq. 2. Although plenty of proofs and analyses are provided in the subsequent sections, I still wonder whether this minor modification is enough to obtain robustness.
2. The assumption of the paper is too strong. The authors claim that the reason for the vulnerability of conventional GNNs is the aggregation function, such as a sum or mean, which can be distorted arbitrarily by a single outlier. This does not fully make sense, since the weights and activation functions also shape the GNN's output and may act as calibrators for these outliers.
3. The evaluation metric is robustness certificates, which is not straightforward enough. Why not report the attack success rate directly, as in [3-7, 9, 12, 54]?
4. From Table 1, it seems that natural accuracy also decreases with the proposed Soft Medoid. What is the trade-off here compared with other defense methods?
Minor concerns: The time cost is missing. Cora ML and Citeseer are too small; are there any results on large-scale datasets such as Pubmed?
===================================
After reading the rebuttal, the authors addressed my questions well, and I decided to raise my score to 6.
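Point 1 above can be made concrete. Below is a minimal sketch of a Soft-Medoid-style aggregation as described there (a temperature-controlled softmax over negative summed distances, used to weight the inputs); this is my own simplified reconstruction, omitting the paper's edge-weight/degree terms, so treat the details as assumptions rather than the authors' exact Eq. 2:

```python
import numpy as np

def soft_medoid(X, T=1.0):
    """Simplified Soft-Medoid-style aggregation (edge weights omitted):
    softmax weights from negative summed distances, controlled by a
    temperature T, followed by a weighted average of the inputs.
    T -> 0 approaches the hard Medoid; T -> inf approaches the mean."""
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    scores = -dist.sum(axis=1) / T        # robust "centrality" scores
    s = np.exp(scores - scores.max())     # numerically stable softmax
    s /= s.sum()
    return s @ X                          # fully differentiable w.r.t. X

clean = np.array([[0.9, 1.1], [1.0, 1.0], [1.1, 0.9], [1.0, 1.1], [0.9, 1.0]])
X = np.vstack([clean, [[100.0, -100.0]]])   # one adversarial outlier

print(soft_medoid(X, T=1e-3))  # ~hard medoid: stays near the clean cluster
print(soft_medoid(X, T=1e6))   # ~mean: dragged toward the outlier
```

The temperature T thus interpolates between a robust but non-smooth estimator and a smooth but non-robust one, which is what makes the "minor modification" question in point 1 worth asking.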
Correctness: It may be correct, but I think more experimental results are needed to support the claims.
Clarity: The paper is well written.
Relation to Prior Work: Yes.
Summary and Contributions: This paper proposes a new robust aggregation function for graph neural networks to defend against graph-structure-based adversarial attacks. It can tolerate adversarial attacks with a higher percentage of adversarial edges, while the function remains fully differentiable and well suited for end-to-end deep learning. Experiments are conducted to demonstrate the superiority of the proposed aggregation method.
Strengths:
1. The proposed aggregation is simple and easy to understand, with just one temperature hyperparameter.
2. A robustness analysis and a theoretical certificate on the breakdown point of the proposed aggregation are given; it enjoys a larger value (0.5) compared to that (0) of the sum, average, etc.
3. The authors conducted extensive experiments to validate the effect of the Soft Medoid, on different architectures and three commonly used datasets, using hyperparameter search, and compared against different SOTA baselines under different structure-based attacks; they also consider graphs whose nodes exhibit different degree distributions. Hence I regard the results as fairly convincing.
4. It is also pointed out in the paper that the increased defense against structure-based attacks comes at the cost of decreased defense against attribute-based adversarial attacks, and some analysis is given.
This paper helps to strengthen the robustness of GNNs against structure-based attacks by changing the aggregation function, which also provides motivation for future research.
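For context on point 2, the (finite-sample) breakdown point referenced there is, to my understanding, the standard Donoho-Huber notion (notation mine):

```latex
\epsilon^{*}(T; x_1, \dots, x_n)
  = \min\Big\{ \tfrac{m}{n} \;:\;
      \sup_{\tilde{X}_m} \big\| T(\tilde{X}_m) \big\| = \infty \Big\},
```

where $\tilde{X}_m$ ranges over corrupted samples in which $m$ of the $n$ points are replaced by arbitrary values. For the mean, a single corrupted point suffices ($\epsilon^{*} = 1/n \to 0$), whereas the medoid tolerates up to half the points being corrupted ($\epsilon^{*} \to 0.5$), which is the gap the certificate in point 2 formalizes.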
Weaknesses:
1. The time cost of the proposed aggregation function in practice is not given. Though the worst-case complexity is analyzed, practitioners may care more about wall-clock time: what is the time cost of one training epoch (or one inference) compared to vanilla models like GCN, and compared to other defenses?
2. Several previous attacks were used to evaluate the robustness of the aggregation; however, a defense-aware adversarial attack is not considered. Knowing the change of aggregation and its shortcomings, such as the larger bias for smaller perturbations, the adversary can potentially change the attack methodology to bypass the defense. In this attack-and-defense arms race, when a new defense is proposed, the authors should consider an adaptive adversarial attack to make the defense strong and meaningful.
3. Though the (finite-sample) breakdown-point certificate is given, it is conditioned on the case where the result of the estimator can be placed arbitrarily. In practice, an adversary typically does not need to move the result arbitrarily far, but only to some degree; the authors should make an effort to defend against this kind of adversary, and compare the proposed method against different baselines.
4. Three commonly used datasets (graphs) are considered, but their sizes are all of the same order of magnitude. The authors should consider larger graphs with more nodes, and report the defense results, along with computation cost, compared to different baselines.
5. In line 158, it is mentioned that the Soft Medoid comes with the risk of a higher bias for small perturbations and high epsilon. The authors should take this effect into account when conducting experiments, to better expose the shortcomings to readers.
6. As found in the paper, the increased robustness against structure-based attacks comes at a cost of decreased robustness against attribute attacks. The authors should make it clear how much robustness is lost to attribute attacks when using the Soft Medoid aggregation; otherwise, the method just makes the model robust to structure attacks but highly vulnerable to attribute attacks.
Correctness: The theoretical claims are justified by logical analysis, and the empirical methodology is also correct.
Clarity: Line 72: "ill-suited suited" contains a duplicated "suited"; it should read "ill-suited".
Relation to Prior Work: The authors generalize the medoid function from robust statistics to a differentiable Soft Medoid function; the differences from previous defenses are also analyzed and compared.
Additional Feedback: I have read the author feedback; it addressed some of my concerns, such as the time cost in Table B and the defense results against several attacks in Table A. The method is straightforward and simple, and the proposed certificate is new and interesting, hence I will still vote to accept this paper. ----------------------------- Most of my comments are summarized above; one more suggestion: can this aggregation method be applied in other areas to defend against adversaries, such as in federated learning? When an artificially generated (adversarial) participant joins, can this aggregation make the model robust to such an attack?