NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 3972
Title: Nonzero-sum Adversarial Hypothesis Testing Games

Reviewer 1

Summary of the model: A set of samples is either drawn from p or from some q chosen by an attacker from a set Q. The defender must look at the samples and decide which is the case. The attacker gets utility if the defender decides incorrectly, but pays a cost for producing the samples that depends on the choice of q. (A numerical sketch of this setup appears after this review.)

Summary of results: Shows existence of a mixed-strategy Nash equilibrium. Leaves open the existence of a pure-strategy equilibrium, or natural conditions under which the equilibrium is pure (it seems to me this would be a very nice and likely result, given some strengthening of the assumptions). Shows that in equilibrium, error rates concentrate to zero as the number of samples n grows large. Gives an asymptotic result on the rate of concentration, which is exponential in n.

Originality: medium-high. Combines ideas from different literatures to ask an interesting new question at the intersection.
Quality: medium. Well-written and organized; the results are a strong starting point for the problem introduced.
Clarity: medium-high in my opinion.
Significance: medium-high in my opinion. Hopefully it will spark more work in this area, in both modeling and results; perhaps applications as well.

Strengths:
- The paper is well-written and well-organized. However, I admit that I did not fully understand where the model was going from the introduction until I got to Section 3 itself.
- The model is an interesting point to bridge from. It incorporates aspects of adversarial classification, hypothesis testing, and strategic attacker-defender games (strategic classification).
- The results are nice starting points as well, although it seems relatively clear that these must hold given the assumption that p and Q are separated.

Drawbacks:
- No constructive algorithm or computational results, though these feel possible.
- The model of the attacker feels specific. It makes sense that the separation of Q from p is necessary for these results. But it feels natural that in many cases the attacker could construct a strategy by first sampling from p, then making arbitrary (perhaps small) changes.

Other: Many questions arise for me, but that does not mean the paper is incomplete -- these could be future work. Can we compute an equilibrium; is it unique? (If not, are there natural assumptions under which it is?) Is it ever pure-strategy; are there assumptions under which it is always pure-strategy? It would also be interesting to understand the "gain" we get from knowing the game is not zero-sum and the adversary is strategic, and playing an equilibrium, versus simply playing a min-max strategy and assuming the adversary only tries to minimize our utility.

For works like this, one may find the "Stackelberg equilibrium" concept useful. It supposes that the defender first commits to a strategy, and the attacker then responds. It is used in some of the security-games literature and may make sense here as well. That may turn out to have computational or existence implications.

Nice job on this paper, and please respond to any of my comments if you feel it would be helpful.

--------
After response: Thanks for the author response. It was clear, interesting, and informative.
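
To make the model summarized at the top of this review concrete, here is a minimal Monte Carlo sketch of the testing game under assumptions of my own choosing: a 1-D Gaussian null p = N(0,1), attacker alternatives q_mu = N(mu,1), a made-up quadratic cost, and a likelihood-ratio threshold test for the defender. None of these choices are taken from the paper; the sketch only illustrates the trade-off the attacker faces between evading detection and paying the cost of q.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 50               # number of samples the defender observes
    threshold = 0.0      # log-likelihood-ratio threshold (illustrative choice)

    def log_lr(x, mu):
        # log of q_mu(x_1..x_n)/p(x_1..x_n) for p = N(0,1), q_mu = N(mu,1)
        return np.sum(mu * x - 0.5 * mu**2)

    def defender_flags_attack(x, mu):
        # reject the null (flag "attack") when the log-likelihood ratio is large
        return log_lr(x, mu) > threshold

    def error_rates(mu, trials=20_000):
        type1 = np.mean([defender_flags_attack(rng.normal(0.0, 1.0, n), mu)
                         for _ in range(trials)])   # genuine data flagged as attack
        type2 = np.mean([not defender_flags_attack(rng.normal(mu, 1.0, n), mu)
                         for _ in range(trials)])   # attack data passed as genuine
        return type1, type2

    # Attacker payoff as described above: probability of going undetected,
    # minus a cost for playing q_mu; here cost(q_mu) = mu^2 is made up.
    for mu in (0.2, 0.5, 1.0):
        type1, type2 = error_rates(mu)
        print(f"mu={mu}: type I={type1:.3f}, type II={type2:.3f}, "
              f"attacker payoff={type2 - mu**2:.3f}")

In the game itself the defender's test would of course be chosen in equilibrium rather than fixed in advance; the sketch only illustrates how the payoffs are formed, not how an equilibrium would be computed.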

Reviewer 2

The authors consider an adversarial hypothesis testing game. There is a fixed null hypothesis p known to both the defender and the external agent. The type of the external agent is either attacker, with probability theta, or non-attacker, with probability 1-theta. If the external agent is a non-attacker, the defender observes n examples drawn from p. If the external agent is an attacker, the defender observes n examples drawn from a q != p selected by the attacker. A pure strategy of the defender is a function which classifies the observations as coming from p or not from p. The defender seeks to minimize a weighted combination of its Type I and Type II errors. The utility of the attacker is the probability that samples from q are misidentified as coming from p (i.e., the defender makes a Type II error), minus a cost term for playing q.

The cost function for the attacker is assumed to be continuous with a unique minimum q*. The paper also makes the additional assumption (A4) that distributions closer to p than q* (in KL divergence) are not available as pure strategies for the attacker. While some regularity assumption seems necessary, this one seems constraining: it essentially guarantees that the attacker plays a point near q* in equilibrium, as this both minimizes cost and is difficult for the defender to detect. I wonder if a weaker assumption is possible here.

Since the strategy sets of the attacker and defender are uncountable, some work is required to establish the existence of a Nash equilibrium. Moreover, the authors are able to prove that every Nash equilibrium can be replaced with one in which the defender plays a likelihood-ratio test, thresholding on q(x_n)/p(x_n) for observations x_n. This is an interesting result connecting the setting to ordinary hypothesis testing.

Even without (A4), the paper shows weak convergence results as n -> infinity: the defender's error vanishes in the limit of infinite samples, and the attacker converges to the minimum-cost distribution. Finally, the main result of the paper states that the defender's error vanishes exponentially in the limit, and characterizes the exponent of the decay; the exponent from classical hypothesis testing is recovered.

Overall the paper is clear and easy to read, and the results are interesting.
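
As an aside on the last two points (the likelihood-ratio structure and the classical exponent being recovered), the following small, self-contained check illustrates the classical benchmark for a fixed alternative: when the Type I error of a likelihood-ratio test is held at a constant level, the Type II error decays roughly like exp(-n * KL(p||q)) (Chernoff-Stein). The Bernoulli pair p = Ber(0.5), q = Ber(0.8) and the level alpha = 0.05 are my own illustrative choices, not the paper's example, and the paper's Bayesian exponent may of course differ from this Neyman-Pearson benchmark.

    from math import comb, log

    p, q, alpha = 0.5, 0.8, 0.05

    def binom_pmf(k, n, r):
        return comb(n, k) * r**k * (1 - r)**(n - k)

    def type2_error(n):
        # For Bernoulli data the log-likelihood ratio is increasing in the number
        # of ones k, so the LR test rejects the null when k >= k_star.  Pick the
        # smallest k_star keeping the Type I error at or below alpha ...
        tail, k_star = 0.0, n + 1
        for k in range(n, -1, -1):
            if tail + binom_pmf(k, n, p) > alpha:
                break
            tail += binom_pmf(k, n, p)
            k_star = k
        # ... then the Type II error is the probability, under q, of falling below k_star.
        return sum(binom_pmf(k, n, q) for k in range(k_star))

    kl_pq = p * log(p / q) + (1 - p) * log((1 - p) / (1 - q))   # KL(p || q)
    for n in (50, 100, 200, 400):
        beta = type2_error(n)
        # -log(beta)/n creeps up toward KL(p||q) as n grows (the limit is approached slowly)
        print(n, beta, round(-log(beta) / n, 4), "vs KL(p||q) =", round(kl_pq, 4))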

Reviewer 3

I enjoyed reading this submission and would support its publication in NeurIPS. The work studies an adversarial testing situation in which a defender accepts or rejects a null hypothesis based on n independent samples, the null distribution is fixed, and an adversary chooses the alternative distribution. This is a classical setting of minimax testing, except that the adversary has an additional cost associated with picking each alternative (and this cost function is known to the defender). I find the problem and perspective original, and both the paper and the proofs are well-written, with attention paid to detail.

The specific adversarial setting considered in this paper is a bit simple, as the cost c(q) does not vary with n, so that the adversary's strategy degenerates (in some sense) in the limit of large n. But I think this paper may lead to interesting follow-up work that considers more nuanced asymptotic settings.

A couple of comments/questions:

1. Would some weaker version of Assumption (A4) be sufficient in the Neyman-Pearson (not Bayesian) framework? For example, in the 1-dimensional example of Section 5, the optimal test by the defender against the fixed alternative q* in the Bayesian setting might not achieve asymptotically perfect power against a different alternative q. But in the Neyman-Pearson setting, at least intuitively, it seems like the acceptance region of an optimal test should be a "vanishingly small neighborhood" of the null distribution p, so I would intuitively expect the result of Lemma A.6 to always hold. Perhaps there is a different counterexample for the Neyman-Pearson setting in higher dimensions? In any case, I'd appreciate some discussion of this, perhaps deferred to the supplement.

2. Is it necessary to think about a defender's mixed strategy over Phi_n, which is already a space of randomized tests? (This feels like too many levels of mixing...) Is it true that for any mixed strategy sigma_n over Phi_n, the defender can always achieve the same objective with the marginalized pure strategy phi_n(x) = integral phi(x) dsigma_n(phi)? (A small numerical illustration of this intuition appears after this review.)

------------------------
Post-rebuttal: Thanks to the authors for the responses, clarification, and further discussion.
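
Regarding the second question above, here is a small finite-sample-space check of the linearity intuition (my own illustration, with made-up distributions and tests, not anything from the paper): because a test's Type I error E_p[phi(X)] and its Type II error 1 - E_q[phi(X)] are both linear in phi, a mixed strategy over randomized tests attains exactly the same error probabilities as the single marginalized randomized test phi_bar(x) = integral phi(x) dsigma(phi). Whether this fully settles the equilibrium question is, of course, for the authors to confirm.

    import numpy as np

    rng = np.random.default_rng(1)
    m = 6                                # size of a toy observation space (a single draw, for simplicity)
    p = rng.dirichlet(np.ones(m))        # null distribution
    q = rng.dirichlet(np.ones(m))        # alternative distribution

    phis = rng.uniform(size=(3, m))      # three randomized tests: phi_i(x) = P(reject null | x)
    sigma = np.array([0.5, 0.3, 0.2])    # a mixed strategy over the three tests

    def errors(phi):
        return float(p @ phi), float(1.0 - q @ phi)   # (Type I, Type II)

    # Error pair of the mixed strategy: the sigma-average of each test's errors ...
    mixed = sigma @ np.array([errors(phi) for phi in phis])
    # ... versus the error pair of the single marginalized test phi_bar = sum_i sigma_i phi_i.
    marginal = np.array(errors(sigma @ phis))

    print(mixed, marginal)
    assert np.allclose(mixed, marginal)  # identical, as linearity predicts

The same argument goes through unchanged when x ranges over blocks of n samples rather than a single observation, since the error probabilities remain linear in the test.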