Summary and Contributions: This work studies a restricted black-box attack on GNNs: only nodes outside the set of highest-degree nodes can be modified, and they are all modified by the same constant vector. The attack assumes little knowledge of the node classification model and relies mostly on the graph structure. This is a reasonable and novel setting. Based on a random-walk analysis and an assumption on the label distribution, the authors propose a simple strategy, RWCS, that attacks the nodes with the highest importance scores, defined as the column sums of a certain power of the transition matrix. This strategy fails to keep increasing the mis-classification rate as the perturbation strength grows, and hence the authors further propose a greedy-correction variant, GC-RWCS, based on an analysis revealing that the mean vulnerability function is approximately submodular under a homophily assumption. GC-RWCS is shown to outperform a random baseline and several centrality baselines on three benchmark datasets and two GNNs.
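To make the scoring concrete, here is a minimal NumPy sketch of the importance score as described above (column sums of a power of the random-walk transition matrix), together with a top-J selection that excludes high-degree nodes. The walk length k, the degree-percentile cutoff, and all names are my own illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def rwcs_scores(A, k=4):
    """Hedged sketch of RWCS-style importance: column sums of the k-th power
    of the random-walk transition matrix. A is a dense (n, n) adjacency
    matrix; k is a hypothetical default, not taken from the paper."""
    deg = A.sum(axis=1)
    deg[deg == 0] = 1.0                   # guard against isolated nodes
    M = A / deg[:, None]                  # row-stochastic transition matrix
    Mk = np.linalg.matrix_power(M, k)
    return Mk.sum(axis=0)                 # column sums = importance scores

def select_attack_nodes(A, num_nodes, degree_percentile=90, k=4):
    """Pick the num_nodes highest-scoring nodes while excluding the
    highest-degree ('celebrity') nodes, as the threat model requires.
    The percentile threshold is an illustrative assumption."""
    deg = A.sum(axis=1)
    allowed = deg <= np.percentile(deg, degree_percentile)
    scores = rwcs_scores(A, k)
    scores[~allowed] = -np.inf            # forbid high-degree nodes
    return np.argsort(scores)[::-1][:num_nodes]

# Toy usage on a small random graph
rng = np.random.default_rng(0)
A = (rng.random((50, 50)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T                               # symmetric, no self-loops
print(select_attack_nodes(A, num_nodes=5))
```

The degree filter reflects the threat model's "no celebrity nodes" restriction; the actual cutoff used in the paper may differ.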
Strengths:
- The attack setting under study is carefully explained. That one may not attack high-degree nodes (celebrity nodes) appears sensible.
- The proposed methods are simple to execute.
- The proposed methods are motivated and backed by theoretical analysis.
- This work is a valuable contribution to the GNN attack literature.
Weaknesses:
- The execution of the method is not truly black-box; it requires gradient information to compute the perturbation vector.
- The restriction to a constant perturbation vector appears limiting. A more severe attack could likely be developed if the perturbations were more finely tailored.
Correctness: All claims sound correct.
Clarity: This paper is very well written.
Relation to Prior Work: The authors explain well the novelty of the work in context.
Reproducibility: Yes
Additional Feedback:
- In Eqn (1), W should be on the right of H (see the corrected form below).
- The maximum node degree m does not seem to appear in the analysis. Could the authors elaborate on how m plays a role in the theory?
-----------
Post rebuttal: I have read the response and thank the authors for their efforts.
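For reference, a hedged reconstruction of what Eqn (1) presumably intends, assuming the paper follows the standard GCN propagation rule of Kipf & Welling, where the weight matrix multiplies the hidden representation from the right:

```latex
H^{(l+1)} = \sigma\!\left( \tilde{A}\, H^{(l)}\, W^{(l)} \right),
\qquad
\tilde{A} = \tilde{D}^{-1/2} (A + I)\, \tilde{D}^{-1/2}
```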
Summary and Contributions: The paper studies a new type of black-box adversarial attack for the node classification task, assuming that the attacker has no access to the model parameters and can act only on the input. Furthermore, a constraint is included to select the subset of nodes to attack based on their importance, which is deduced from a combination of random-walk and structural information on the graph. The attack perturbs node attributes. The experimental evaluation investigates the effect of these attacks on the loss and the misclassification rate as the perturbation strength varies.
Strengths:
- The application and relevance are potentially impactful, ultimately helping to understand the robustness of GNN models.
- The method is well described, with clear notation, and the claims are consolidated by theoretical analysis.
Weaknesses:
- The experimental setup is rather limited with respect to the variety of GNN approaches tested; the paper only looks at the GCN model. Given that the effectiveness of the proposed method mainly relies on the experimental results, I'd recommend providing a more extensive comparison.
- In Figure 1, the results for PageRank are missing.
- The parameter k, which determines the neighbourhood, is key to the selection strategy. The authors could investigate the effect of varying it in more detail.
Correctness: The theoretical and empirical analyses are both correct. Nevertheless, I would extend the empirical comparison, as it is crucial to establishing the relevance and contribution of the study.
Clarity: I find the paper very clear and easy to read.
Relation to Prior Work: RWCS seems quite similar to PageRank; the specific contribution of the proposed approach could be better highlighted (see the comparison sketched below).
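To sharpen the comparison: as I read the paper, the RWCS score is a finite-horizon column sum of the transition matrix, whereas PageRank is the stationary distribution of a damped walk. The notation below is mine, not the authors':

```latex
\text{RWCS:}\quad I(j) = \sum_{i} \big[ M^{k} \big]_{ij},
\qquad
\text{PageRank:}\quad \pi = \alpha\, M^{\top} \pi + (1-\alpha)\,\tfrac{1}{n}\,\mathbf{1},
```

where M is the row-stochastic transition matrix and \alpha is the damping factor. The absence of damping and the fixed horizon k are the visible differences; whether they matter empirically could be discussed.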
Reproducibility: Yes
Additional Feedback: Thank you to the authors for their effort in the rebuttal. I updated my recommendation.
Summary and Contributions: This work presents an adversarial attack on Graph Neural Networks. The attack is designed for scenarios constrained both by the number of accessible nodes and by the black-box nature of the attack. Despite these restrictions, the work presents evidence that the attack is competitive and outperforms other attack strategies in similar settings.
Strengths: The paper is well organized and concise. The assumed threat model incorporates a more realistic perspective than previous works. It presents both theoretical arguments and experimental verification in support of its major claims.
Weaknesses:
- The theoretical foundation builds on a number of assumptions, specifically Assumption 5, that may limit its applicability in real-world scenarios.
- The work claims the attack is limited by the adversary's restricted access to the attack set (the vulnerable nodes of the graph), but there is no discussion of how this attack set is generated, its properties, or its effect on the proposed methodology.
- The three baseline attacks used in the experimental results are not sufficiently introduced.
- Further, it is not clear why the chosen baselines are more suitable comparisons than recent works like [1] and [3].
- While the results indicate that in many cases the methodology decreases the performance of the model, in many of the scenarios there is only a minor improvement over the baseline comparisons.
- Lambda, the magnitude of the perturbation on each node, is used as a measure of the strength of the attack, but intuitively it seems that J, the number of nodes perturbed, would be a more accurate measure of the attack strength.
Correctness: Yes.
Clarity: Yes
Relation to Prior Work: Yes.
Reproducibility: Yes
Additional Feedback: The effectiveness of the attacks seems weaker on GCN than on the other two models. Do the authors have any intuitive understanding of why this occurs?
Summary and Contributions: This paper considers the problem of black-box attacks on graph neural networks under a realistic constraint that attackers have access to only a subset of nodes.
Strengths: The paper is theoretically grounded with good empirical results.
Weaknesses: Readability of the paper can be improved.
Correctness: yes
Clarity: yes
Relation to Prior Work: yes
Reproducibility: Yes
Additional Feedback: Thank you for your effort in preparing this rebuttal. I will keep my score and recommend acceptance.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The authors consider a novel setup of black-box attacks for GNNs with a constraint of limited node access. The approach seems sound and the empirical results are reasonable. It seems to me that the implication of this work is to highlight the extent of the vulnerability of GNNs. I could not see how the proposed approach can be used, for example, to design better defense schemes (e.g., adversarial training). That being said, I feel highlighting this vulnerability is still important. I would suggest the authors also look into scenarios where the feature perturbation matrix \tau is optimized using existing schemes (a rough sketch of what I mean appears below). Further, adding a discussion on the implication of their findings and approach for adversarial defense would be useful.
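To illustrate the suggestion, a hypothetical PyTorch sketch of optimizing \tau with projected gradient ascent on a surrogate model. Note that this requires surrogate gradients and so steps outside the paper's constant-vector, black-box setting; the surrogate(X, A) signature, the proxy loss, and all parameters are my assumptions.

```python
import torch
import torch.nn.functional as F

def optimize_perturbation(surrogate, X, A, target_nodes, eps, steps=20, lr=0.1):
    # Per-node perturbation tau, optimized under an l_inf budget eps.
    tau = torch.zeros(len(target_nodes), X.shape[1], requires_grad=True)
    for _ in range(steps):
        X_adv = X.clone()
        X_adv[target_nodes] = X[target_nodes] + tau
        logits = surrogate(X_adv, A)               # assumed surrogate signature
        # Untargeted proxy: push predictions away from the current labels.
        loss = F.cross_entropy(logits, logits.argmax(dim=1))
        grad, = torch.autograd.grad(loss, tau)
        with torch.no_grad():
            tau += lr * grad.sign()                # ascend the surrogate loss
            tau.clamp_(-eps, eps)                  # project back onto the budget
    return tau.detach()
```

This is only a sketch of "existing schemes" in the PGD family; comparing it against the constant-vector attack under the same budget would quantify how much is lost by the constant-vector restriction.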