__Summary and Contributions__: This work studies a restricted black-box attack on GNNs: only nodes outside the set of highest-degree nodes may be modified, and each is modified by the same constant vector. The attack assumes little knowledge of the node classification model and relies mostly on the graph structure. This is a reasonable and novel setting.
Based on a random-walk analysis and an assumption on the label distribution, the authors propose a simple strategy, RWCS, that attacks the nodes with the highest importance scores, defined as the column sums of a certain power of the transition matrix. This strategy fails to keep increasing the misclassification rate as the perturbation strength grows, and hence the authors further propose a greedy-correction variant, GC-RWCS, based on an analysis revealing that the mean vulnerable function is approximately submodular under a homophily assumption. The GC-RWCS strategy is shown to outperform a random baseline and several centrality baselines on three benchmark datasets and two GNNs.
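For concreteness, the RWCS score described above can be sketched as follows. This is only my reading of the strategy, and the function and variable names (`rwcs_scores`, `A`, `k`) are my own rather than the paper's notation.

```python
import numpy as np

def rwcs_scores(A, k=1):
    # RWCS importance scores, per my reading of the paper: column sums of
    # the k-th power of the random-walk transition matrix P = D^{-1} A.
    # (Function/variable names are illustrative, not the paper's notation.)
    P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    Pk = np.linalg.matrix_power(P, k)
    return Pk.sum(axis=0)                  # column sums as importance scores

# Toy example: a star graph with center node 0 and leaves 1-3.
A = np.array([[0., 1., 1., 1.],
              [1., 0., 0., 0.],
              [1., 0., 0., 0.],
              [1., 0., 0., 0.]])
scores = rwcs_scores(A, k=1)               # the center receives the highest score
```

In the paper's setting, k would (as I understand it) correspond to the number of propagation steps of the GNN, and the top-scoring nodes, subject to the degree restriction, would be selected for perturbation.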

__Strengths__: - The attack setting under study is carefully explained. The restriction that one may not attack high-degree (celebrity) nodes appears sensible.
- The proposed methods are simple to execute.
- The proposed methods are motivated and backed by theoretical analysis.
- This work is a valuable contribution to the GNN attack literature.

__Weaknesses__: - The execution of the method is not truly black-box; it requires gradient information to compute the perturbation vector.
- The constant perturbation vector appears limiting. A more severe attack could likely be developed if the perturbations were more finely tailored.

__Correctness__: Everything appears correct.

__Clarity__: This paper is very well written.

__Relation to Prior Work__: The authors explain well the novelty of the work in context.

__Reproducibility__: Yes

__Additional Feedback__: - In Eqn (1), W should be on the right of H.
- The maximum node degree m does not seem to appear in the analysis. Could the authors elaborate on how m plays a role in the theory?
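The point about Eqn (1) is presumably a dimension issue. Assuming Eqn (1) is the standard GCN layer update H' = sigma(A_hat @ H @ W) (my assumption, since the equation itself is not reproduced here), a quick shape check illustrates why W must sit on the right:

```python
import numpy as np

# Assuming Eqn (1) is the standard GCN layer update H' = sigma(A_hat @ H @ W)
# (my assumption; the equation is not reproduced in this review). With n nodes,
# d input features, and d' hidden units, W must multiply H from the right:
n, d, d_hidden = 5, 3, 4
A_hat = np.eye(n)                      # placeholder for the normalized adjacency
H = np.random.randn(n, d)              # node feature matrix, shape (n, d)
W = np.random.randn(d, d_hidden)       # layer weights, shape (d, d')

H_next = np.maximum(A_hat @ H @ W, 0)  # (n, d) @ (d, d') -> (n, d'): valid
# The reversed order W @ H has shapes (d, d') @ (n, d), which is undefined
# whenever d' != n, so "W on the right of H" is the correct form.
```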
-----------
Post rebuttal:
I have read the response and thank the authors for their efforts.

__Summary and Contributions__: The paper studies a new type of black-box adversarial attack for the node classification task, assuming that the attacker has no access to the model parameters and can act only on the input. Furthermore, a constraint is included to select the subset of nodes to attack based on their importance, which is deduced from a combination of random-walk and structural information on the graph. The attack alters the node attributes. The experimental evaluation investigates the effect of these attacks on the loss and the misclassification rate as the perturbation strength varies.

__Strengths__: - The application and relevance are potentially impactful, ultimately helping to understand the robustness of GNN models.
- The method is well described, with clear notation, and the claims are theoretically consolidated.

__Weaknesses__: - The experimental setup is rather limited with respect to the variety of GNN approaches tested; the paper only looks at the GCN model. Given that the effectiveness of the proposed method mainly relies on the experimental results, I'd recommend providing a more extensive comparison.
- In Figure 1, the results for PageRank are missing.
- The parameter k, which determines the neighborhood considered, is key to the selection strategy. The authors could investigate the effect of varying it in more detail.

__Correctness__: The theoretical and empirical analyses are both correct. Nevertheless, I would extend the empirical comparison, as it is crucial to establishing the relevance and contribution of the study.

__Clarity__: I find the paper very clear and easy to read.

__Relation to Prior Work__: RWCS seems quite similar to PageRank; the specific contribution of the proposed approach could be better highlighted.
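To make the comparison concrete, here is a minimal PageRank power iteration. The contrast, as I understand it, is that RWCS takes column sums of a fixed k-th power of the undamped transition matrix, whereas PageRank iterates a damped transition to its stationary distribution (parameter names below are illustrative, not the paper's):

```python
import numpy as np

def pagerank(A, alpha=0.85, iters=100):
    # Standard PageRank power iteration on the row-stochastic transition
    # matrix P = D^{-1} A with damping factor alpha.
    n = A.shape[0]
    P = A / A.sum(axis=1, keepdims=True)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = alpha * (r @ P) + (1.0 - alpha) / n
    return r

# Toy example: star graph with center 0 and leaves 1-3; PageRank ranks the
# center highest, just as an RWCS-style column sum of P would.
A = np.array([[0., 1., 1., 1.],
              [1., 0., 0., 0.],
              [1., 0., 0., 0.],
              [1., 0., 0., 0.]])
r = pagerank(A)
```

On graphs like this toy example the two scores induce the same ranking, which is why a sharper statement of what distinguishes RWCS would strengthen the paper.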

__ Reproducibility__: Yes

__Additional Feedback__: Thank you to the authors for their effort in the rebuttal. I updated my recommendation.

__Summary and Contributions__: This work presents an adversarial attack on Graph Neural Networks. The attack is designed for scenarios limited both in the number of accessible nodes and by the black-box nature of the attack. Despite these restrictions, the work presents evidence that the attack is competitive and outperforms other attack strategies in similar settings.

__Strengths__: The paper is well organized and concise. The assumed threat model incorporates a more realistic perspective than previous works. It presents both theoretical arguments and experimental verification in support of its major claims.

__Weaknesses__: - The theoretical foundation builds on a number of assumptions, notably Assumption 5, that may limit its applicability in real-world scenarios.
- The work claims the attack is limited by the adversary's limited access to the attack set (the vulnerable nodes of the graph), but there is no discussion of how this attack set is generated, its properties, or its effect on the proposed methodology.
- The three baseline attacks used in the experimental results are not sufficiently introduced.
- Further, it is not clear why the chosen baselines are more suitable comparisons than recent works like [1] and [3].
- While the results indicate that the methodology decreases the model's performance in many cases, in many scenarios there is only a minor improvement over the baseline comparisons.
- Lambda, the magnitude of the perturbation on each node, is used as a measure of the strength of the attack, but intuitively it seems that J, the number of nodes perturbed, would be a more accurate measure of attack strength.

__Correctness__: Yes.

__Clarity__: Yes.

__Relation to Prior Work__: Yes.

__Reproducibility__: Yes

__Additional Feedback__: The effectiveness of the attacks seems to be weaker on GCN than on the other two models. Do the authors have any intuitive understanding of why this occurs?

__Summary and Contributions__: This paper considers the problem of black-box attacks on graph neural networks under a realistic constraint that attackers have access to only a subset of nodes.

__Strengths__: The paper is theoretically grounded with good empirical results.

__Weaknesses__: The readability of the paper could be improved.

__Correctness__: Yes

__Clarity__: Yes

__Relation to Prior Work__: Yes

__Reproducibility__: Yes

__Additional Feedback__: Thank you for your effort in preparing this rebuttal. I will keep my score and recommend acceptance.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The authors consider a novel setup of black-box attacks for GNNs with a constraint of limited node access. The approach seems sound, and the empirical results are reasonable. It seems to me that the implication of this work is to highlight the extent of the vulnerability of GNNs. I could not see how the proposed approach could be used, for example, to design better defense schemes (e.g., adversarial training). That being said, I feel that highlighting this vulnerability is still important.
I would suggest that the authors also look into scenarios where the feature perturbation matrix \tau is optimized (using existing schemes). Further, adding a discussion of the implications of their findings and approach for adversarial defense would be useful.