This paper proposes a new robust aggregation function for graph neural networks to defend against adversarial attacks. The questions raised by the reviewers were properly addressed in the rebuttal. However, one reviewer found that the theoretical analysis provided in this paper does not actually prove the "adversarial robustness" of the proposed aggregation function. More specifically, the analysis only shows that it is harder for an attacker to push the aggregated results to $\pm\infty$, whereas adversarial robustness requires showing that "the aggregated results won't change significantly with small input perturbation". The AC and the other reviewers agree with this point but believe the paper still offers sufficient novelty and empirical contribution. We encourage the authors to address this concern in the revised version.
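For concreteness, the gap can be sketched as follows (the notation $\mathrm{AGG}$, $X$, $\Delta$ is illustrative and not taken from the paper). The current analysis establishes a boundedness property,
$\|\mathrm{AGG}(X + \Delta)\| < \infty$ for any admissible perturbation $\Delta$,
whereas adversarial robustness would additionally require a stability property of the form
$\|\mathrm{AGG}(X + \Delta) - \mathrm{AGG}(X)\| \le \varepsilon$ whenever $\|\Delta\|$ is small.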