The paper makes novel and solid contributions, both theoretical and empirical. Its main message is that robustness to adversarial attacks and accuracy are not necessarily at odds and can be achieved simultaneously. To support this claim, the authors observe that many machine learning problems exhibit a natural separation between classes that is larger than the perturbation size of typical adversarial attacks. They further prove that smoothness of the decision function suffices to ensure both accuracy and robustness. The particular implementation of the smoothness criterion proposed in the paper drew some criticism from the reviewers, but this will hopefully motivate the authors and other researchers to investigate alternative methods for enforcing smoothness of decision functions.