Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
This paper studies robust learnability, an important problem for the ML community. The authors provide both theoretical and methodological contributions addressing sample complexity and computational efficiency in the robust learning framework. The paper contains many results of importance, notably a nice characterization of the "strength" of an adversary, but the most interesting result is a negative one: a strong impossibility result showing that the class of monotone conjunctions is not efficiently robustly learnable when the adversary can flip ω(log n) bits. The paper is quite well written, although it would be greatly improved if the authors incorporated the content of their rebuttal into the camera-ready version. Because of the current importance of the subject, and because of the quality of the results, I recommend acceptance with a short talk.