NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 8790
Title: Certifying Geometric Robustness of Neural Networks

Reviewer 1

This work borrows from recent research on certifying robustness of neural networks on image datasets against L_infinity-norm perturbations and extends it to certification against geometric attacks. More specifically, previous works used sound approximations of the outputs of non-linear activations to certify against worst-case L_infinity perturbations. Accumulating these approximations gives a (potentially loose) output region that can be certified as robust (a minimal sketch of this style of bound propagation follows this review). This work extends and develops techniques to handle geometric attacks such as rotation and scaling.

Overall, I found this paper to be well written. I particularly appreciated the running-example style employed for exposition. I list my suggestions and concerns below:

1. Why is the input always assumed to be perturbed with L_infinity noise? There is no justification for this in the text as far as I can see; its inclusion is unnecessary and somewhat confusing given that the main contribution is to defend against other kinds of perturbations.

2. It is surprising that the networks tested have non-negligible robustness to geometric attacks (even the undefended networks). Engstrom et al. [1] (and others) have shown that simple attacks such as rotating the input usually cause a misclassification; why are the verified networks here seemingly robust to these attacks?

3. The networks verified are extremely small. How difficult would it be to scale to larger datasets / networks? It seems that, due to the optimization and sampling mechanism employed, this would suffer more than related work (such as IBP [2], which only requires two passes through the network to propagate interval bounds). For example, reducing the sampling error to <= 0.5% takes over 45s, which points to a scalability issue.

4. Adversarial attacks are commonly restricted to small epsilon values in the L_infinity norm since this is highly likely to preserve the semantic meaning of the input. Where in the paper do you define a similarity metric under geometric attacks? For example, a horizontally flipped "two" digit can be interpreted as a five, yet your certification scheme would not exclude such cases.

[1] Engstrom et al. A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations
[2] Gowal et al. On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models
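For concreteness, here is a minimal sketch of the kind of sound, layer-by-layer bound accumulation referenced above, in the spirit of interval bound propagation [2]. The two-layer network, its sizes, and the epsilon are illustrative assumptions, not the networks verified in the paper.

```python
# Hedged sketch: interval bound propagation through affine + ReLU layers,
# then a robustness check on the resulting (potentially loose) output region.
import numpy as np

def affine_bounds(W, b, lo, hi):
    """Propagate elementwise input bounds [lo, hi] through x -> Wx + b."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    out_center = W @ center + b
    out_radius = np.abs(W) @ radius          # worst case over the input box
    return out_center - out_radius, out_center + out_radius

def relu_bounds(lo, hi):
    """ReLU is monotone, so interval bounds pass through directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Illustrative toy network and L_inf ball of radius eps around an input x.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)
x, eps = rng.normal(size=4), 0.01

lo, hi = affine_bounds(W1, b1, x - eps, x + eps)
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(W2, b2, lo, hi)

pred = np.argmax(W2 @ np.maximum(W1 @ x + b1, 0.0) + b2)
robust = all(lo[pred] > hi[j] for j in range(3) if j != pred)
print("certified robust:", robust)
```

These bounds come from a single sweep over the interval representation (roughly the cost of two ordinary forward passes), which is the scalability contrast drawn in point 3.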

Reviewer 2

Significance: The field of adversarial robustness and verification currently focuses mostly on various L_p metrics, which are not aligned with human perception (e.g. a shift of an image does not change its content but produces a large L_p difference; see the small example after this review). Here the authors take various geometric transformations and provide robustness certificates against them.

Style: Well written, with a clear structure and an intuitive explanation via a running example.

Questions:
* How does this relate to vector-field-based transformations? Is it a subset thereof?
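A tiny numeric illustration of the L_p point above; the image and the shift are made up for illustration, not taken from the paper.

```python
# A one-pixel shift leaves the content unchanged but yields a large L_p difference.
import numpy as np

img = np.zeros((8, 8))
img[:, 3] = 1.0                      # a single bright vertical stripe
shifted = np.roll(img, 1, axis=1)    # shift the whole image right by one pixel

print("L_inf difference:", np.abs(img - shifted).max())    # 1.0 (the maximum possible)
print("L_2  difference :", np.linalg.norm(img - shifted))  # 4.0
```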

Reviewer 3

Originality: The method is definitely new, and previous work that attempted to do verification against rotation attacks is discussed.

Quality:
-> The method described is technically sound. The only caveat I have concerns scalability. Unless I'm mistaken, this requires, for each split of the hyperparameter and for each pixel, solving an LP with Gurobi to obtain the linear bound candidates, and then performing the branch-and-bound Lipschitz optimization to get the offset, which seems extremely expensive. What do the runtimes in Table 3 correspond to? The mean verification time for an image? Just the time to derive the linear bounds? The time to derive the linear bounds for one pixel?
-> With regards to evaluating the performance, the method compares very favourably to previous techniques (interval analysis), which are poorly suited to the task. This shows why the proposed method is interesting, but it would also be beneficial to get an estimate of its limitations. One ideal way would be to obtain some upper bounds on the verifiability that can be achieved, perhaps from some geometric adversarial attacks?

Clarity: The method is described sufficiently well that it would be possible for a motivated reader to reproduce the results.

Significance: The improvement proposed in this paper is of interest because, while a significant amount of progress has been made in bounding the outputs of neural networks given initial constraints, there has not been as much work on describing how to encode richer properties beyond linear constraints or norm-limited perturbations.

Also, some more local comments that I had at various places:

L.87, paragraph "Geometric image transformations": I don't understand the motivation for assuming that all the pixels are first perturbed by a small L-inf noise. If the robustness of interest is against rotation, what is the point of also including robustness to small perturbations?

L.140, Eq. 5, Eq. 6: V has not been introduced.

L.155 to 159: This paragraph exactly describes the process of branch-and-bound, which is extremely standard in optimization. It would be good to at least point out the connection for the benefit of readers. More generally, on the subject of this branch-and-bound procedure: the method proposed here is to run a branch-and-bound-type method to obtain the offset to add to the linear constraint such that it is sound (a small illustrative sketch of such a pass is included after this review). Once this offset has been computed, it is not guaranteed that the bound on the intensity will be good enough to certify robustness. Would there be some benefit in using a looser bound for the nu_l offset (essentially skipping the branching at that stage to save computation), but instead doing branching at a higher level, so as to refine the splits on the possible parameters if the full verification (using DeepPoly) isn't tight enough? This would have the added benefit of making things significantly tighter (imagine, in Figure 3, being able to introduce a split at pi/8: this additional split would massively reduce the volume of the overapproximation).

l.223: typo "verifer".

[Updated review] I read the comments by the other reviewers, as well as the rebuttal by the authors. At the moment, my opinions are as follows:
-> The scalability of the method is in line with usual formal verification of NN work.
-> According to their comment (l.37 of the rebuttal), given enough computational budget (I hope they will report the time it took, but I can imagine a reasonable estimate), they can actually perform verification quite well (the bounds are not ridiculously loose).
-> I'm satisfied with the explanation given by the authors with regard to their decision to get as tight as possible bounds before passing them to the verifier, due to the difference in runtime of the two components of the pipeline.

I think the paper is interesting, and in my experience the problem they tackle is one that regularly comes up in discussions with people interested in formal verification of NNs, so there would definitely be interest from the community once published. I'm raising my score as both my questions have been addressed.
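A minimal, hedged sketch of the Lipschitz branch-and-bound pass discussed in the review above: given a candidate linear bound a*t + b on a pixel-intensity function f(t) of a transformation parameter t, it over-approximates the worst-case violation max_t f(t) - (a*t + b), i.e. the soundness offset nu_l. The concrete function, Lipschitz constant, and tolerance are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: Lipschitz branch-and-bound for the soundness offset nu_l.
# Assumes f is L-Lipschitz on [lo, hi]; adding the returned offset to b makes
# the candidate linear bound a*t + b sound over the whole parameter interval.
import heapq
import math

def max_violation(f, a, b, L, lo, hi, tol=1e-3):
    """Upper bound on max_{t in [lo, hi]} of f(t) - (a*t + b)."""
    def midpoint_value(l, h):
        m = (l + h) / 2.0
        return f(m) - (a * m + b)

    def interval_upper(l, h):
        # g(t) = f(t) - (a*t + b) is (L + |a|)-Lipschitz, so its max on [l, h]
        # is at most its midpoint value plus (L + |a|) * (h - l) / 2.
        return midpoint_value(l, h) + (L + abs(a)) * (h - l) / 2.0

    best = midpoint_value(lo, hi)                  # best concrete violation found
    heap = [(-interval_upper(lo, hi), lo, hi)]     # max-heap on interval upper bounds
    while heap:
        neg_u, l, h = heapq.heappop(heap)
        u = -neg_u
        if u <= best + tol:                        # every remaining interval is below u
            return u                               # sound, tol-tight offset
        m = (l + h) / 2.0                          # branch: split the worst interval
        for sub_lo, sub_hi in ((l, m), (m, h)):
            best = max(best, midpoint_value(sub_lo, sub_hi))
            heapq.heappush(heap, (-interval_upper(sub_lo, sub_hi), sub_lo, sub_hi))
    return best

# Illustrative usage: f is 1-Lipschitz, candidate bound is 0.6*t + 0.1 on [0, pi/2].
offset = max_violation(math.sin, 0.6, 0.1, 1.0, 0.0, math.pi / 2, tol=1e-4)
print("soundness offset nu_l ~", offset)
```

The trade-off the reviewer raises is visible here: the Lipschitz slack (L + |a|)(h - l)/2 shrinks only as intervals are subdivided, so a tighter offset is bought purely with more branching, which is exactly the computation one might prefer to spend on parameter-level splits instead.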