NeurIPS 2019
This paper is concerned with automating the search for data augmentation transformations for image classification with DNN models. It does so in a way that avoids re-training (or fine-tuning) the model for every transformation scored. The resulting method is substantially faster than the previous SotA (AutoAugment) while providing results of comparable quality. Although both this work and AutoAugment rely on a carefully chosen search space, and neither strongly outperforms random search over that space, the dramatic reduction in resource requirements relative to AutoAugment justifies publication. However, the authors are asked to provide further results in the final version, in particular a more thorough comparison against random-search baselines over the same search space, including repeated runs with different random seeds, to convince readers that the method improves enough over random search to justify its added complexity.
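To make the requested baseline concrete, the following is a minimal sketch of a random-search comparison with repeated runs. The operation names, policy format, and the evaluate_policy stub are illustrative assumptions, not the paper's or AutoAugment's actual search space or training pipeline; in practice evaluate_policy would train and validate a model under the sampled policy.

```python
import random
import statistics

# Illustrative augmentation operations; a real baseline would use the
# paper's own operation set and magnitude ranges.
OPS = ["ShearX", "TranslateY", "Rotate", "Color", "Posterize", "Solarize"]

def sample_policy(num_sub_policies=5, rng=random):
    """Sample a random policy: each sub-policy is two (op, probability, magnitude) triples."""
    return [
        [(rng.choice(OPS), rng.random(), rng.randint(0, 9)) for _ in range(2)]
        for _ in range(num_sub_policies)
    ]

def evaluate_policy(policy, seed):
    """Hypothetical stand-in: would train a model with this policy and return validation accuracy."""
    rng = random.Random(hash((str(policy), seed)))
    return 0.95 + 0.01 * rng.random()  # placeholder score, not a real result

def random_search(num_trials=20, num_repeats=5):
    """Random-search baseline with repeated runs, as requested in the meta-review."""
    best_per_repeat = []
    for repeat in range(num_repeats):
        rng = random.Random(repeat)  # different seed per repetition
        best = max(
            evaluate_policy(sample_policy(rng=rng), seed=repeat)
            for _ in range(num_trials)
        )
        best_per_repeat.append(best)
    return statistics.mean(best_per_repeat), statistics.stdev(best_per_repeat)

if __name__ == "__main__":
    mean_acc, std_acc = random_search()
    print(f"random-search baseline: {mean_acc:.4f} +/- {std_acc:.4f}")
```

Reporting the mean and standard deviation of the best score across repetitions, rather than a single run, is what would allow readers to judge whether the proposed method's gain over random search is larger than run-to-run noise.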