NeurIPS 2020

Towards Interpretable Natural Language Understanding with Explanations as Latent Variables


Meta Review

This paper proposes an EM framework for explainable language processing.

Strengths
• The idea is new and neat.
• The proposed method is technically sound.
• Experiments are conducted to support the claims.
• The paper is generally well-written.

Weaknesses
• The experiments and presentation can be further improved.

The authors are encouraged to further improve the quality of the paper based on the reviewers' comments.

NOTE FROM PROGRAM CHAIRS: For the camera-ready version, please expand your broader impact statement to include a more substantive discussion of the potential negative impacts of your work, as well as mitigations. One reviewer has noted that "This work will raise ethical concerns if the generated explanations are very incorrect, especially if deployed in healthcare. There is no solid quantitative result/ metric for the quality of the explanations produced in the work."