NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 5315
Title: A Game Theoretic Approach to Class-wise Selective Rationalization

Reviewer 1


The paper studies an important problem: learning models that localize "rationales" in a corpus of polarized opinions (such as product reviews) in an "unsupervised" fashion, i.e., without ground-truth rationales. Unlike the pioneering prior work [18], this paper further pins down rationales that are class-specific, so that both "pros" and "cons" are identified simultaneously. To me this is a meaningful extension, and the paper is largely well written and easy to follow. Detailed comments follow:

1. I wonder if the authors have tried to learn class-specific rationales that are ground-truth *agnostic*. In a potentially simplified setting, you could still have class-specific rationale generators (e.g., one for localizing "pro" rationales and another for "con"), but they would not necessarily need to be tied to the ground truth so as to differentiate between "factual" and "counterfactual" inputs (more on this in comment 4).

2. L.183: "For the discriminators, the outputs of all the times are max-pooled ..." - why choose max-pooling over time rather than a simple sequence classifier that directly outputs 0/1? (See the sketch of the two read-outs after this review.)

3. Eq. 10: cite [18], since this is essentially the same regularizer introduced there (reproduced after this review).

4. For the baselines, if we adopt the simpler setting outlined in comment 1, it would be interesting to consider another model that is basically RNP with two "class-specific" generators that share parameters and take the class label as an additional input. It is closely related to POST-EXP yet would benefit from a jointly trained predictor.

5. Why are the sparsity levels in Tables 1 and 2 not exactly the same across the three models in comparison? Assuming the three models all make "independent" selection decisions (per L.179), it should be straightforward to enforce a common exact number of input tokens to keep by selecting the top-K positions (see the top-K sketch after this review).

6. Comparing Table 2 with Table 1, the performance of POST-EXP sees a drastic drop, from consistently outperforming RNP to underperforming it. Why is that?

=== post author response ===

Thanks for your clarifications. I believe including some of them in the paper will help your readers appreciate it more and clear away similar confusions. That said, I am still not quite convinced why a class-specific yet ground-truth-agnostic RNP extension would yield degenerate results - are you suggesting that the classification task per se encourages the model to exploit "spurious statistical cues" in the dataset more than the factual-vs-counterfactual classification task does?
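To make comment 2 concrete, here are the two discriminator read-outs I am contrasting, as a minimal PyTorch-style sketch (tensor names and sizes are hypothetical; I am only assuming the discriminator is a recurrent encoder producing one hidden state per time step):

```python
import torch
import torch.nn as nn

encoder = nn.LSTM(input_size=100, hidden_size=200, batch_first=True)
classifier = nn.Linear(200, 1)

def maxpool_logit(x):
    # Option described at L.183: max-pool the hidden states over time,
    # then classify the pooled vector.
    hidden, _ = encoder(x)                # (batch, seq_len, 200)
    pooled = hidden.max(dim=1).values     # (batch, 200)
    return classifier(pooled)

def last_state_logit(x):
    # Alternative I am asking about: a plain sequence classifier that
    # reads off the final hidden state and directly outputs a 0/1 score.
    hidden, _ = encoder(x)
    return classifier(hidden[:, -1])      # (batch, 1)
```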
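For reference, the regularizer introduced in [18], which Eq. 10 appears to replicate (in my notation, with z_t in {0,1} the selection variable for token t out of T):

\[
\Omega(\mathbf{z}) \;=\; \lambda_1 \sum_{t=1}^{T} z_t \;+\; \lambda_2 \sum_{t=2}^{T} \lvert z_t - z_{t-1} \rvert ,
\]

where the first term controls sparsity (how many tokens are selected) and the second encourages the selected tokens to form contiguous phrases.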
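And a minimal sketch of the top-K selection I have in mind for comment 5 (hypothetical names; it assumes each model exposes per-token selection scores before binarization):

```python
import torch

def top_k_mask(scores: torch.Tensor, k: int) -> torch.Tensor:
    """Keep exactly k tokens per example.

    scores: (batch, seq_len) per-token selection scores from a generator.
    Returns a (batch, seq_len) 0/1 mask with exactly k ones per row, so all
    three models can be compared at an identical sparsity level.
    """
    idx = scores.topk(k, dim=-1).indices   # positions of the k highest scores
    mask = torch.zeros_like(scores)
    mask.scatter_(-1, idx, 1.0)            # mark those positions as selected
    return mask
```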

Reviewer 2


ORIGINALITY & CLARITY: The paper picks an interesting direction and clearly motivates the idea in the introduction. The figures are helpful for understanding the paper.

QUALITY & SIGNIFICANCE: The main concern I have with this paper is that it lacks a head-to-head comparison with the existing literature, notably Lei et al., EMNLP 2016. Ideally, the authors should have performed an evaluation that could be compared to that previous paper. From what I understand, Table 2 of the EMNLP 2016 paper could have been compared to Table 2 in this manuscript; I am not convinced why the numbers are so different here. The subjective evaluation should have been done more thoroughly. What was the inter-annotator agreement between the different crowd workers? Given such a small sample size (100), strong agreement is necessary for the results to be meaningful. (A sketch of the agreement check I would expect follows this review.)

==== AFTER THE AUTHOR RESPONSE: Thank you for the clarification.
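For concreteness, a minimal sketch of the agreement check I would expect, assuming each of the 100 items was judged by several crowd workers (hypothetical data layout; uses statsmodels' Fleiss' kappa):

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# ratings: (n_items, n_raters) array of categorical judgments, e.g.
# 100 items x 3 crowd workers, categories {0, 1} = {bad, good rationale}.
ratings = np.random.randint(0, 2, size=(100, 3))   # placeholder data

table, _ = aggregate_raters(ratings)   # (n_items, n_categories) count table
kappa = fleiss_kappa(table)            # > 0.6 is usually read as substantial agreement
print(f"Fleiss' kappa: {kappa:.3f}")
```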

Reviewer 3


Specifically, by setting up the architecture as an adversarial network, the idea is that the predictor (in this case the discriminator) is trained not only on good explanations but also on counterfactual examples, and can therefore better tell the difference. This is a valuable and original use of adversarial models in the area of transparency. The submission is framed game-theoretically, but the game itself is a very simple one, so I would suggest that this paper be classified more under transparency, as that is where it makes its biggest contribution to the current literature. Additionally, while the 3-player game was very well modeled and explained, in the end the implementation was reduced to a 2-player game with a generator and a discriminator. It would have helped if the authors had made this explicit and simplified their formulation accordingly. That said, I would say this is a great contribution to the transparency literature, and I enjoyed reading this submission.