NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 5500
Title: Functional Adversarial Attacks


This paper proposes functional adversarial attacks, which fool a classifier by applying the same transformation to every input feature (e.g., every pixel). As a practical instantiation, the authors propose ReColorAdv, which uniformly changes the colors of an input image. The authors describe a constrained optimization approach for finding functional adversarial examples, and propose combining different threat models (additive and functional) to construct stronger attacks. Experiments are performed on CIFAR-10, where the proposed method successfully fools a ResNet classifier protected by defenses such as adversarial training and TRADES. All the reviewers are experts in designing adversarial attacks. They found the proposed approach novel overall, but were initially concerned about the strength of the attack; this concern was addressed in the author feedback. I would suggest the following revisions for the camera-ready:

1. Add the extra experiments provided in the author feedback, to further demonstrate the strength of the attack.
2. If possible, discuss possible defenses against the proposed attack; this will help readers understand how strong the attack is.
3. Discuss in detail the differences between the proposed approach and existing approaches (e.g., unconstrained adversarial examples).
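For readers less familiar with this threat model, the following is a minimal sketch of a functional attack in PyTorch, assuming a standard image classifier with inputs in [0, 1]. The per-channel affine color transform and the bound names `eps_scale`/`eps_shift` are illustrative assumptions for exposition only; the paper's ReColorAdv uses a more expressive color mapping, and this is not the authors' implementation.

```python
# Sketch of a functional (uniform per-pixel) adversarial attack.
# Hypothetical parameterization: one per-channel affine map f(p) = a*p + b,
# applied identically to every pixel, optimized under box constraints.
import torch
import torch.nn.functional as F

def functional_attack(model, x, y, eps_scale=0.1, eps_shift=0.05,
                      steps=50, lr=0.01):
    """Find a single color transform that, applied uniformly to all
    pixels of x (shape [N, 3, H, W]), maximizes the classifier's loss."""
    a = torch.ones(1, 3, 1, 1, requires_grad=True)   # per-channel scale
    b = torch.zeros(1, 3, 1, 1, requires_grad=True)  # per-channel shift
    opt = torch.optim.Adam([a, b], lr=lr)
    for _ in range(steps):
        x_adv = torch.clamp(a * x + b, 0.0, 1.0)     # same f on every pixel
        loss = -F.cross_entropy(model(x_adv), y)     # ascend the loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():                        # project into threat model
            a.clamp_(1.0 - eps_scale, 1.0 + eps_scale)
            b.clamp_(-eps_shift, eps_shift)
    return torch.clamp(a * x + b, 0.0, 1.0).detach()
```

Composing such a transform with a small additive L-infinity perturbation would correspond to the combined (additive plus functional) threat model that the paper argues yields stronger attacks.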