NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 760
Title: Model Compression with Adversarial Robustness: A Unified Optimization Framework

Reviewer 1

Some experimental details will need to be clarified in the rebuttal:
- "Unless otherwise specified, we set the perturbation magnitude to be 76 for MNIST and 4 for the other three datasets": why choose those specific magnitudes?
- "We set PGD attack iteration numbers n to be 16 for MNIST and 7 for the other three datasets": could ATMC remain robust under more iterations, e.g., >= 20?
- Have you used random starts to alleviate gradient masking? (See the PGD sketch after this list.)
- Would the authors consider releasing their code for reproducibility?
- Two missing references, both of which empirically studied the preservation of robustness under quantization:
  "Robustness of Compressed Convolutional Neural Networks", IEEE BigData 2018
  "To compress or not to compress: Understanding the Interactions between Adversarial Attacks and Neural Network Compression", SysML 2019.
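A minimal sketch of an L_inf PGD attack with a random start (hypothetical PyTorch code; the function name, step size, and default values are illustrative and not taken from the paper):

```python
import torch

def pgd_attack(model, x, y, eps=4/255, alpha=1/255, steps=7, random_start=True):
    """L_inf PGD. With random_start, the initial perturbation is drawn uniformly in the eps-ball."""
    x_adv = x.clone().detach()
    if random_start:
        x_adv = x_adv + torch.empty_like(x_adv).uniform_(-eps, eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()          # ascent step on the loss
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back into the eps-ball
            x_adv = torch.clamp(x_adv, 0.0, 1.0)         # keep pixels in valid range
    return x_adv.detach()
```

Random initialization inside the eps-ball is a standard precaution against evaluating robustness from a single deterministic starting point, which can overstate robustness when gradients are masked.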

Reviewer 2

This paper formulates a new problem and proposes a reasonable algorithm. However, I am not totally convinced that the current joint optimization is significantly better than a two-step approach of (1) first doing adversarial learning and then (2) compressing the model by pruning and quantization (a minimal sketch of such a baseline appears after this comment). As shown in Figure 2, the performance of the proposed method is almost the same as the two-step approach (adversarial learning + pruning). Moreover, the current paper could be improved in the following aspects:
- eq (3): what about the non-convolutional layers?
- line 128-line 133: it is not clear what "the nonuniform quantization" means, and how it leads to the equation between line 132 and line 133.
- eq (4): in this paper f^adv seems to be limited to the PGD attack. I would like to see results with other adversarial learning methods.
- Table 1: all the networks here are relatively small, for which compression seems not very important. Is it possible to provide experiments for large neural networks?
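For concreteness, a minimal sketch of the two-step baseline referred to above, assuming it means adversarial training followed by one-shot global magnitude pruning (hypothetical PyTorch code reusing the pgd_attack helper from the earlier sketch; names and hyperparameters are illustrative):

```python
import torch
import torch.nn.utils.prune as prune

def two_step_baseline(model, train_loader, epochs=10, sparsity=0.9, device="cuda"):
    """Hypothetical baseline: (1) adversarial training, then (2) one-shot magnitude pruning."""
    opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
    model.to(device).train()
    # Step 1: adversarial training on PGD examples (pgd_attack as in the earlier sketch).
    for _ in range(epochs):
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            x_adv = pgd_attack(model, x, y)
            opt.zero_grad()
            loss = torch.nn.functional.cross_entropy(model(x_adv), y)
            loss.backward()
            opt.step()
    # Step 2: one-shot global L1 magnitude pruning over all conv/linear weight tensors.
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=sparsity)
    return model
```

In this framing, the proposed method differs by optimizing the compression constraints and the adversarial min-max objective jointly rather than sequentially.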

Reviewer 3

The idea seems interesting. It is aligned with the recent wave of work studying standard versus robust accuracy, and focuses on a specific, relatively less-noticed problem area (compression). The loss of robustness has been overlooked in most CNN compression literature; this paper addresses it well and would bring in new insights. The authors also put much effort into exploring different settings of model compression and adversarial training, which is appealing for revealing the relationship between the two aspects. The write-up is mature and easy to follow; the literature review of the robustness-compactness relationship (Sec 1.1) is very thorough and interesting.