NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 5182
Title: A Unified Framework for Data Poisoning Attack to Graph-based Semi-supervised Learning

Reviewer 1


The topic of this work is very interesting, but the presentation has a lot of room for improvement. First, the motivation of the work is not so clear. Why is it an important concern to be able to poison a G-SSL model? The three references [1-3] provided seem to be at least a decade old. Are these techniques widely applied in modern security/privacy-sensitive applications? Are there recent works that bring them into consideration? Are the proposed methods applicable to recent techniques as well? Answering these questions would help justify the motivation better.

The explanation of G-SSL and the formulation in 3.1 are very confusing. In particular, we need to understand what the subscripts i, j iterate over. For example, in (1), do they loop over all labeled data, all unlabeled data, or one labeled and one unlabeled point? (I spell out the standard formulation I have in mind at the end of this review.) Also, the description "the unlabeled data is then predicted through the energy minimization principle" does not seem to bind any variables in (1) to the "unlabeled data". In the formulation, the capital $X_l$ and $X_u$ seem to denote two sets of data points rather than two individual points, so I cannot make sense of $X_l+\Delta_x$ in (2) (p3, L134). This confusion leaves me unable to understand the threat model of the attack. Does it mean that all data can be manipulated as long as the total perturbation is small enough, or that only a small group of instances can be perturbed? If the latter, how are these groups chosen?

The experiments also have many presentation issues. The datasets are not explained at all, except for mnist17. Although a reference is provided in the footnote, the paper should be self-contained. In Fig 2, different values of $n_l$ are used for comparison but never explained; I first found this notation in 3.5, in the summation of the constraint, without explanation. Does it mean the total number of labeled data points? Beyond these presentation issues, I am unsure how to interpret hyper-parameters such as $d_{max}$ and metrics such as RMSE. Sure, the RMSE increases from 0.2 to 0.3, but does that actually mean the predictions become very bad? Some real examples and qualitative analysis may clarify this concern.

Last but not least, it is questionable how generic the proposed unified framework is. It seems to be applicable only to the specific form of learning algorithms defined in 3.1. These algorithms seem to require constructing an adjacency matrix from kernels, which may not scale well to large datasets. In Sec 4, it seems that datasets of more than 20,000 nodes are used, but I do not see details about how the G-SSL methods handle them. Are these nodes unlabeled or labeled? If my understanding of $n_l$ is correct, then the G-SSL approaches handle up to 2,000 labeled nodes, which would make sense. Again, all these details are very important for evaluating this work, but they are missing. I did not check the proofs of the theorems line by line, so I have no comments on them. Overall, I think this paper needs more work to resolve the clarity issues and to justify the motivation better.

Some minor issues:
- Thm 2 refers to a theorem "??" (broken reference).
- Shouldn't the input dimension for MNIST be 28*28 = 784 (rather than 780 in Table 4.1)?
- Sec 4.4 says three approximate solvers are compared, but only two are presented.
- PageRank should be cited.
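To be concrete about my comment on the formulation in 3.1, the following is my own assumption of what eq. (1) intends, not something stated explicitly in the paper: the standard Gaussian-fields / harmonic-function energy of Zhu et al. (2003) is $E(f) = \frac{1}{2}\sum_{i,j} w_{ij}(f_i - f_j)^2$ with $f_i = y_i$ fixed for every labeled point $i$, where $i$ and $j$ range over all nodes (labeled and unlabeled) and $w_{ij}$ are the graph weights; minimizing $E$ over the unlabeled entries gives the closed-form prediction $f_u = (D_{uu} - W_{uu})^{-1} W_{ul} f_l$. If eq. (1) follows this convention, stating it explicitly would resolve my confusion.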

Reviewer 2


This paper proposes a unifying objective function for the effectiveness of a data poisoning attack on graph-based semi-supervised learning (G-SSL). Different from other works in the literature, the attack is considered to take place during training, so the training data is poisoned. Although the problem is discussed at a fair level of generality in terms of which components of the training data may be changed, the authors restrict their attention to cases where the 'outcome' component of the training data is changed. The authors then propose two different algorithms to find the best attack under given constraints according to the objective function: one algorithm for regression, where real-valued outcomes are assumed, and another for classification, where the outcomes are discrete (in particular, binary in this work). While the first algorithm guarantees convergence with a linear convergence rate, the second algorithm is shown to perform well empirically. The paper is well written. As a non-expert reader, I found the motivation, the problem definition, the proposed methods, and the discussion accompanying the results clear. Taking for granted the authors' claim that a new problem is being studied, the extent of the experiments is satisfying. The unifying approach to formulating the problem of finding the best attack is also an important contribution.

Reviewer 3


In this paper, the authors propose a framework for data poisoning attacks on graph-based semi-supervised learning. The novelty of this paper is that the data poisoning attack is introduced at training time. For label poisoning attacks on the regression task, the authors propose a gradient-based algorithm that solves the resulting nonconvex trust-region problem and converges to a global minimum at a linear rate. For label poisoning attacks on the classification task, which is an NP-hard integer programming problem, the authors adopt a stochastic gradient descent algorithm that leverages Monte Carlo sampling to obtain a gradient estimator. The experiments are conducted quite thoroughly and show the effectiveness of the proposed methods. In general, the proposed algorithms are well motivated and mathematically convincing, and the paper is sufficiently well written. The authors give a detailed theoretical analysis, and the proofs are neat.
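To make the classification attack concrete for other readers, below is a minimal, self-contained sketch (my own illustration, not the authors' implementation) of a score-function (REINFORCE-style) Monte Carlo gradient estimator for a binary label-flipping attack on a toy harmonic-function G-SSL model. The toy data, the soft budget penalty, and all variable names are assumptions made purely for illustration.

# Illustrative sketch only: a score-function Monte Carlo gradient estimator for
# a binary label-flipping attack on a toy label-propagation (G-SSL) model.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian blobs in 2-D; the first n_l points are labeled with y in {-1, +1}.
n_l, n_u = 10, 40
X = np.vstack([rng.normal(-2.0, 1.0, (25, 2)), rng.normal(2.0, 1.0, (25, 2))])
y = np.concatenate([-np.ones(25), np.ones(25)])
perm = rng.permutation(50)
X, y = X[perm], y[perm]
y_l, y_u_true = y[:n_l], y[n_l:n_l + n_u]

# Gaussian-kernel weight matrix and the harmonic-function predictor
# f_u = (D_uu - W_uu)^{-1} W_ul f_l (Zhu et al., 2003).
W = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / 2.0)
Lap = np.diag(W.sum(1)) - W
predict_u = lambda labels: np.linalg.solve(Lap[n_l:, n_l:], W[n_l:, :n_l] @ labels)

def attack_objective(flip_mask):
    # Attacker's objective: negative test error under the flipped labels (smaller = stronger attack).
    flipped = y_l * (1.0 - 2.0 * flip_mask)           # flip label where mask == 1
    pred = np.sign(predict_u(flipped))
    return -(pred != y_u_true).mean()

theta = np.zeros(n_l)                                 # logits of per-label flip probabilities
lr, n_samples, budget_penalty = 0.5, 20, 0.5
for step in range(200):
    p = 1.0 / (1.0 + np.exp(-theta))
    grad = np.zeros_like(theta)
    for _ in range(n_samples):
        z = (rng.random(n_l) < p).astype(float)       # sample a flip pattern
        obj = attack_objective(z) + budget_penalty * z.mean()
        grad += obj * (z - p)                         # REINFORCE: d/dtheta log Bernoulli(z; sigmoid(theta)) = z - p
    theta -= lr * grad / n_samples                    # SGD step on the expected objective

flip_mask = (1.0 / (1.0 + np.exp(-theta)) > 0.5).astype(float)
print("flips:", int(flip_mask.sum()), "attack objective:", attack_objective(flip_mask))

The key point this sketch illustrates is that the expected attack objective over a Bernoulli distribution of label flips is differentiable in the flip logits even though the flips themselves are discrete, which is what makes stochastic gradient descent applicable.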

Reviewer 4


I thank the authors for the response. My score remains unchanged after reading all reviews and the response.

Significance: The problem of data poisoning is important and inherently tied to algorithmic robustness. The paper initiates a new line of research by studying data poisoning in the context of semi-supervised learning. The work is significant from this perspective, and some follow-up works on this topic are expected. However, the authors fail to provide real scenarios where a study of data poisoning in a semi-supervised setup would be useful.

Originality: Although the problem is somewhat new, the novelty of the methods / theory is not clear. The unified optimisation problem in eqn (2) is certainly interesting, and so is its extension to the regularised semi-supervised formulation in the appendix. Two algorithms are proposed:
- For regression, it is shown that the problem is non-convex, and a trust-region solver is presented along with convergence guarantees. There exist several other methods to solve similar non-convex problems. It is not clear how the analysis differs from, or is more challenging than, existing theory.
- For classification, there is no theory, and the proposed approach is simply greedy / epsilon-greedy. The challenges, if any, of extending such methods to the poisoning scenario are not clear.

Quality and Clarity: Apart from the concerns related to originality, the quality of the work (both theory and numerics) is good. The paper is clearly written in most parts, but contains several typos throughout (see below).

Minor comments:
- line 167: If H = -K'K, shouldn't it be negative semi-definite (all eigenvalues non-positive) instead of indefinite? (See the short derivation below.)
- line 168: M and epsilon are undefined variables.
- Thm 2 statement: the reference to Thm 1 is missing.
- line 236: "for classification".
- Fig 4: There are 2 algorithms, not 3.
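Spelling out the reasoning behind the line 167 comment (assuming K'K denotes $K^\top K$): for any vector $v$, $v^\top H v = -v^\top K^\top K v = -\|Kv\|_2^2 \le 0$, so $H = -K^\top K$ is negative semi-definite, and it is negative definite exactly when $K$ has full column rank; in neither case is it indefinite.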