NeurIPS 2020

Security Analysis of Safe and Seldonian Reinforcement Learning Algorithms


Meta Review

All reviewers support acceptance, citing the paper's contributions: notably, improvements to the robustness of RL algorithms against adversarial attacks, and a clear exposition of how these methods can be applied to real-world problems. I also recommend acceptance. Please consider revising the paper to address the concerns raised in the reviews and rebuttal, in particular by better explaining the scope of the work. Separately, it may be useful to extend the broader impact statement to inform a casual reader that a mathematical safety guarantee on an algorithm is not a replacement for domain-specific safety requirements (for example, the diabetes treatment application would still need oversight for medical safety).