{"title": "Fast AutoAugment", "book": "Advances in Neural Information Processing Systems", "page_first": 6665, "page_last": 6675, "abstract": "Data augmentation is an essential technique for improving generalization ability of deep learning models. Recently, AutoAugment \\cite{cubuk2018autoaugment} has been proposed as an algorithm to automatically search for augmentation policies from a dataset and has significantly enhanced performances on many image recognition tasks. However, its search method requires thousands of GPU hours even for a relatively small dataset. In this paper, we propose an algorithm called Fast AutoAugment that finds effective augmentation policies via a more efficient search strategy based on density matching. In comparison to AutoAugment, the proposed algorithm speeds up the search time by orders of magnitude while achieves comparable performances on image recognition tasks with various models and datasets including CIFAR-10, CIFAR-100, SVHN, and ImageNet. Our code is open to the public by the official GitHub\\footnote{\\url{https://github.com/kakaobrain/fast-autoaugment}} of Kakao Brain.", "full_text": "Fast AutoAugment\n\nSungbin Lim\u2217\u2020\n\nUNIST\n\nsungbin@unist.ac.kr\n\nTaesup Kim\n\nMILA, Universit\u00e9 de Montr\u00e9al, Canada\n\ntaesup.kim@umontreal.ca\n\nIldoo Kim\u2217\nKakao Brain\n\nildoo.kim@kakaobrain.com\n\nChiheon Kim\nKakao Brain\n\nchiheon.kim@kakaobrain.com\n\nSungwoong Kim\n\nKakao Brain\n\nswkim@kakaobrain.com\n\nAbstract\n\nData augmentation is an essential technique for improving generalization ability\nof deep learning models. Recently, AutoAugment [5] has been proposed as an\nalgorithm to automatically search for augmentation policies from a dataset and has\nsigni\ufb01cantly enhanced performances on many image recognition tasks. However,\nits search method requires thousands of GPU hours even for a relatively small\ndataset. In this paper, we propose an algorithm called Fast AutoAugment that \ufb01nds\neffective augmentation policies via a more ef\ufb01cient search strategy based on density\nmatching. In comparison to AutoAugment, the proposed algorithm speeds up the\nsearch time by orders of magnitude while achieves comparable performances on\nimage recognition tasks with various models and datasets including CIFAR-10,\nCIFAR-100, SVHN, and ImageNet. Our code is open to the public by the of\ufb01cial\nGitHub3 of Kakao Brain.\n\n1\n\nIntroduction\n\nDeep learning has become a state-of-the-art technique for computer vision tasks, including object\nrecognition [16, 28, 37], detection [23, 29], and segmentation [4, 11]. However, deep learning models\nwith large capacity often suffer from over\ufb01tting unless signi\ufb01cantly large amounts of labeled data are\nsupported. Data augmentation (DA) has been shown as a useful regularization technique to increase\nboth the quantity and the diversity of training data. Notably, applying a carefully designed set of\naugmentations rather than naive random transformations in training improves the generalization\nability of a network signi\ufb01cantly [21, 26]. 
However, in most cases, designing such augmentations has relied on human experts with prior knowledge on the dataset.

With the recent advancement of automated machine learning (AutoML), there have been efforts to design an automated process of searching for augmentation strategies directly from a dataset. AutoAugment [5] uses reinforcement learning (RL) to automatically find a data augmentation policy when a target dataset and a model are given. It samples an augmentation policy at a time using a controller RNN, trains the model using the policy, and gets the validation accuracy as a reward to update the controller.

∗Equal Contribution
†This work is done at Kakao Brain
3https://github.com/kakaobrain/fast-autoaugment

33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

AutoAugment especially achieves a dramatic improvement in performances on several image recognition benchmarks. However, AutoAugment requires thousands of GPU hours even in a reduced setting, in which the size of the target dataset and the network is small. The recently proposed Population Based Augmentation (PBA) [15] is a method to deal with this problem, based on the population-based training method for hyperparameter optimization. In contrast to previous methods, we propose a new search strategy that does not require any repeated training of child models. Instead, the proposed algorithm directly searches for augmentation policies that maximize the match between the distribution of an augmented split and the distribution of another, unaugmented split via a single model.

In this paper, we propose an efficient search method for augmentation policies, called Fast AutoAugment, motivated by Bayesian DA [36]. Our strategy is to improve the generalization performance of a given network by learning augmentation policies which treat augmented data as missing data points of the training data. However, different from Bayesian DA, the proposed method recovers those missing data points by the exploitation-and-exploration of a family of inference-time augmentations [33, 34] via Bayesian optimization in the policy search phase. We realize this by using an efficient density matching algorithm that does not require any back-propagation for network training for each policy evaluation. The proposed algorithm can be easily implemented by making good use of distributed learning frameworks such as Ray [24].

Our experiments show that the proposed method can search augmentation policies significantly faster than AutoAugment (see Table 1), while retaining comparable performances to AutoAugment on diverse image datasets and networks, especially in two use cases: (a) direct augmentation search on the dataset of interest, and (b) transferring learned augmentation policies to new datasets. On ImageNet, we achieve an error rate of 19.4% for ResNet-200 trained with our searched policy, which is 0.6% better than the 20.0% with AutoAugment.

This paper is organized as follows. First, we introduce related works on automatic data augmentation in Section 2. Then, we present our problem setting to achieve the desired goal and suggest the Fast AutoAugment algorithm to solve the objective efficiently in Section 3. Finally, we demonstrate the efficiency of our method through comparison with baseline augmentation methods and AutoAugment in Section 4.

Table 1: GPU hours comparison of the proposed method with [5]. We estimate computation cost with an NVIDIA Tesla V100 while AutoAugment measured computation cost on a Tesla P100.
Dataset  | AutoAug [5] | Fast AutoAug
CIFAR-10 | 5000        | 3.5
SVHN     | 1000        | 1.5
ImageNet | 15000       | 450

2 Related Work

There are many studies on data augmentation, especially for image recognition. On benchmark image datasets such as CIFAR and ImageNet, random crop, flip, rotation, scaling, and color transformations have been performed as baseline augmentation methods [10, 21, 30]. Mixup [41], Cutout [7], and CutMix [39] have recently been proposed to either replace or mask out image patches randomly and have obtained further improved performances on image recognition tasks. However, these methods are designed manually based on domain knowledge.

Naturally, automatically finding data augmentation methods from data has emerged as a way to overcome the performance limitation that originates from the cumbersome exploration of methods by a human. Smart Augmentation [22] introduced a network that learns to generate augmented data by merging two or more samples in the same class. [32] employed a generative adversarial network (GAN) [9] to generate images that augment datasets. Bayesian DA [36] combined a Monte Carlo expectation maximization algorithm with a GAN to generate data by treating augmented data as missing data points on the distribution of the training set.

Due to the remarkable successes of NAS algorithms on various computer vision tasks [19, 28, 42], several current studies also deal with automated search algorithms to obtain augmentation policies for given datasets and models. The main difference between the aforementioned learned methods and these automated augmentation search methods is that the former exploit generative models to create augmented data directly, whereas the latter find optimal combinations of predefined transformation functions. AutoAugment [5] introduced an RL-based search strategy that alternately trained a child model and an RNN controller and showed state-of-the-art performances on various datasets with different models. Recently, PBA [15] proposed a new algorithm which generates augmentation policy schedules based on population based training [17]. Similar to PBA, our method also employs hyperparameter optimization to search for optimal policies but uses the Tree-structured Parzen Estimator (TPE) algorithm [2] for practical implementation.

Figure 1: An example of augmented images via a sub-policy in the search space S. Each sub-policy τ consists of 2 operations; for instance, τ = [cutout, autocontrast] is used in this figure. Each operation Ō(τ)_i has two parameters: the probability p_i of calling the operation and the magnitude λ_i of the operation. These operations are applied with the corresponding probabilities. As a result, a sub-policy randomly maps an input image to one of 4 images. Note that the identity map (no augmentation) is also possible with probability (1 − p_1)(1 − p_2).

3 Fast AutoAugment

In this section, we first introduce the search space of the symbolic augmentation operations and formulate a new search strategy, efficient density matching, to find the optimal augmentation policies efficiently. 
We then describe our implementation based on Bayesian hyperparameter optimization incorporated into a distributed learning framework.

3.1 Search Space

Let O be a set of augmentation (image transformation) operations O : X → X defined on the input image space X. Each operation O has two parameters: the calling probability p and the magnitude λ which determines the variability of the operation. Some operations (e.g. invert, flip) do not use the magnitude. Let S be the set of sub-policies, where a sub-policy τ ∈ S consists of N_τ consecutive operations {Ō^(τ)_n(x; p^(τ)_n, λ^(τ)_n) : n = 1, . . . , N_τ} and each operation is applied to an input image sequentially with probability p as follows:

\bar{O}(x; p, \lambda) := \begin{cases} O(x; \lambda) & \text{with probability } p \\ x & \text{with probability } 1 - p \end{cases}   (1)

Hence, the output of a sub-policy τ(x) can be described by a composition of operations as

\tilde{x}^{(n)} = \bar{O}^{(\tau)}_n(\tilde{x}^{(n-1)}), \quad n = 1, \ldots, N_\tau,

where \tilde{x}^{(0)} = x and \tilde{x}^{(N_\tau)} = \tau(x). Figure 1 shows a specific example of images augmented by τ. Note that each sub-policy τ is a random sequence of image transformations which depends on p and λ, and this enables a single sub-policy to cover a wide range of data augmentations. Our final policy T is a collection of N_T sub-policies, and T(D) indicates the set of augmented images of dataset D transformed by every sub-policy τ ∈ T:

\mathcal{T}(D) = \bigcup_{\tau \in \mathcal{T}} \{(\tau(x), y) : (x, y) \in D\}.   (2)
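To make the sub-policy semantics in (1)-(2) concrete, the following is a minimal Python sketch of applying one sub-policy to a PIL image. The two example operations, their magnitude scaling, and the `apply_subpolicy` helper are illustrative assumptions for this sketch, not the operations or code of the released implementation.

```python
import random
from PIL import ImageOps

# Illustrative operations O(x; lambda); a real implementation maps the
# continuous magnitude in [0, 1] to an operation-specific range.
def rotate(img, magnitude):
    return img.rotate(magnitude * 30.0)      # e.g. rotate by up to 30 degrees

def autocontrast(img, magnitude):
    return ImageOps.autocontrast(img)        # this operation ignores the magnitude

OPS = {"Rotate": rotate, "AutoContrast": autocontrast}

def apply_subpolicy(img, sub_policy):
    """Apply one sub-policy tau = [(name, p, magnitude), ...] following (1):
    each operation fires with probability p and is skipped otherwise, so the
    identity mapping is possible, as illustrated in Figure 1."""
    for name, p, magnitude in sub_policy:
        if random.random() < p:
            img = OPS[name](img, magnitude)
    return img

# Example: apply_subpolicy(img, [("Rotate", 0.8, 0.5), ("AutoContrast", 0.6, 0.0)])
```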
Figure 2: An overall procedure of augmentation search by the Fast AutoAugment algorithm. For exploration, the proposed method splits the train dataset Dtrain into K folds, each of which consists of two datasets D(k)_M and D(k)_A. Then the model parameter θ is trained in parallel on each D(k)_M. After training θ, the algorithm evaluates B bundles of augmentation policies on DA. During the exploration process, the proposed algorithm does not train the model parameter θ from scratch again. The top-N policies obtained from each fold are appended to an augmentation list T*.

Our search space is similar to that of previous methods except that we use continuous values in [0, 1] for both the probability p and the magnitude λ, which allows more possibilities than a discretized search space.

3.2 Search Strategy

In Fast AutoAugment, we cast the search for an augmentation policy as a density matching between a pair of train datasets. Let D be a probability distribution on X × Y and assume dataset D is sampled from this distribution. For a given classification model M(·|θ) : X → Y parameterized by θ, the expected accuracy and the expected loss of model M(·|θ) on dataset D are denoted by R(θ|D) and L(θ|D), respectively. For a given augmentation policy T, L(θ|T(D)) denotes the expected loss of the model on the images augmented by (2). 
Note that the value of the loss for a fixed policy T can vary according to the randomness in sub-policies due to (1).

3.2.1 Efficient Density Matching for Augmentation Policy Search

For any given pair of Dtrain and Dvalid, our goal is to improve the generalization ability by searching for augmentation policies that match the density of Dtrain with the density of the augmented Dvalid. However, it is impractical to compare these two distributions directly for an evaluation of every candidate policy. Therefore, we perform this evaluation by measuring how much one dataset follows the pattern of the other, making use of the model predictions on both datasets. In detail, let us split Dtrain = DM ∪ DA into DM and DA, which are used for learning the model parameter θ and exploring the augmentation policy T, respectively. We employ the following objective to find a set of learned augmentation policies T*:

\mathcal{T}_* = \operatorname*{argmax}_{\mathcal{T}} R(\theta_* \,|\, \mathcal{T}(D_A))   (3)

where the model parameter θ* is trained on DM. It is noted that, in this objective, T* approximately minimizes the distance between the density of DM and the density of T(DA) from the perspective of maximizing the performance of both model predictions with the same parameter θ. The proposed search objective seeks label-preserving transformations that generate unseen but plausible missing data samples. Namely, it does not transform but rather augments the data space which has to be correctly predicted by a classification network for better generalization. This perspective is also in line with the motivation of Bayesian DA [36]. In practice, we minimize the categorical cross-entropy loss L(θ|T(DA)) instead of maximizing accuracy in (3).
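The practical consequence of (3) is that a candidate policy can be scored with nothing more than forward passes of the model trained on DM. Below is a minimal PyTorch-style sketch of such a scorer; `model`, `data_A`, and the `apply_subpolicy` helper sketched in Section 3.1 are assumptions of this illustration, not the authors' code.

```python
import torch
import torch.nn.functional as F
from torchvision.transforms.functional import to_tensor

def policy_loss(model, policy, data_A, device="cpu"):
    """Score a candidate policy T by the cross-entropy of a fixed,
    pre-trained classifier on the augmented split T(D_A).
    No gradient update is performed, which is what makes each policy
    evaluation cheap compared to retraining a child model."""
    model.eval()
    total, count = 0.0, 0
    with torch.no_grad():
        for image, label in data_A:                   # PIL image, integer label
            for sub_policy in policy:                 # T = {tau_1, ..., tau_N_T}
                x = to_tensor(apply_subpolicy(image, sub_policy))
                x = x.unsqueeze(0).to(device)
                y = torch.tensor([label], device=device)
                total += F.cross_entropy(model(x), y).item()
                count += 1
    return total / count    # lower loss = better match between D_M and T(D_A)
```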
To achieve (3), we propose an efficient strategy for augmentation policy search (see Figure 2). First, we conduct K-fold stratified shuffling [31] to split the train dataset into D(1)train, . . . , D(K)train, where each D(k)train consists of two datasets D(k)M and D(k)A. As a matter of convenience, we omit k in the notation of datasets in the remaining parts. Next, we train the model parameter θ on DM from scratch without data augmentation. Contrary to previous methods [5, 15], our method does not necessarily reduce the given network to child models or proxy tasks.

After training the model parameter, for each step 1 ≤ t ≤ T, we explore B candidate policies B = {T1, . . . , TB} via a Bayesian optimization method which repeatedly samples a sequence of sub-policies from the search space S to construct a policy T = {τ1, . . . , τNT} and tunes the corresponding calling probabilities {p1, . . . , pNT} and magnitudes {λ1, . . . , λNT} to minimize the expected loss L(θ|·) on the augmented dataset T(DA) (see line 6 in Algorithm 1). Note that, during the policy exploration-and-exploitation procedure, the proposed algorithm does not train the model parameter from scratch again; hence the proposed method finds augmentation policies significantly faster than AutoAugment. The concrete Bayesian optimization method is explained in Section 3.2.2.

As the algorithm completes the exploration step, we select the top-N policies over B and denote them collectively by Tt. Finally, we merge every Tt into T*. See Algorithm 1 for the overall procedure. At the end of the process, we augment the whole dataset Dtrain with T* and retrain the model parameter θ. Through the proposed method, we can expect the performance R(θ|·) on the augmented dataset T*(DA) to be statistically higher than that on DA:

R(\theta \,|\, \mathcal{T}_*(D_A)) \geq R(\theta \,|\, D_A)

since the augmentation policy T* works as an optimized inference-time augmentation [33, 34] that makes the model robustly predict correct answers. Consequently, the learned augmentation policies approach (3) and improve the generalization performance as we desired.

3.2.2 Policy Exploration via Bayesian Optimization

Policy exploration is an essential ingredient in the process of automated augmentation search. Since the evaluation of the model performance for every candidate policy is computationally expensive, we apply Bayesian optimization to the exploration of augmentation strategies. Precisely, at line 6 in Algorithm 1, we employ the following Expected Improvement (EI) criterion [18] as the acquisition function to explore candidate policies B efficiently:

EI(\mathcal{T}) = \mathbb{E}\big[\min(\mathcal{L}(\theta \,|\, \mathcal{T}(D_A)) - \mathcal{L}^{\dagger}, 0)\big] = \int \min(\mathcal{L} - \mathcal{L}^{\dagger}, 0)\, P_{\theta, D_A}(\mathcal{L} \,|\, \mathcal{T})\, d\mathcal{L}   (4)

Here the expectation in (4) is taken over the density function P_{θ,DA} on the codomain of the loss function L(θ|T(DA)), which measures the statistical potential of unexplored augmented data (τ(x), y) ∈ T(DA) to approximate (3) for a given pre-trained model M(·|θ). Recall that T consists of sub-policies τ1, . . . , τNT and corresponding parameters {p1, . . . , pNT} and {λ1, . . . , λNT}; hence the density function P_{θ,DA}(L|T) is actually determined by these parameters. L† in (4) denotes the constant threshold of the loss value determined by the quantile of observations among previously explored policies. We employ variable kernel density estimation [35] on the graph-structured search space S to estimate the density function P_{θ,DA}(L|T) and eventually approximate the criterion (4). Practically, since this optimization method is already provided by the tree-structured Parzen estimator (TPE) algorithm [2], we apply the HyperOpt library for the parallelized implementation.
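To make the exploration step concrete, here is a minimal sketch of wiring such a policy search to HyperOpt's TPE optimizer. The operation subset, the flat search-space layout, and the `policy_loss` scorer (the sketch from Section 3.2.1) are illustrative assumptions; the released implementation drives HyperOpt through Ray rather than calling `fmin` directly as shown here.

```python
from hyperopt import Trials, fmin, hp, tpe

OPS = ["ShearX", "Rotate", "AutoContrast", "Cutout"]   # illustrative subset
N_SUBPOLICIES, N_OPS = 5, 2                            # N_T and N_tau

# One candidate policy: every operation slot gets an operation name plus a
# continuous probability p and magnitude lambda, both sampled from [0, 1].
space = [
    (hp.choice("op_%d" % i, OPS),
     hp.uniform("p_%d" % i, 0.0, 1.0),
     hp.uniform("mag_%d" % i, 0.0, 1.0))
    for i in range(N_SUBPOLICIES * N_OPS)
]

def objective(flat_slots):
    # Regroup the flat slots into N_T sub-policies of N_tau operations and
    # score the candidate by the frozen model's loss on T(D_A); TPE then
    # proposes the next candidate from its kernel-density surrogate.
    policy = [list(flat_slots[i:i + N_OPS]) for i in range(0, len(flat_slots), N_OPS)]
    return policy_loss(model, policy, data_A)          # assumed scorer and data

best = fmin(objective, space, algo=tpe.suggest, max_evals=200, trials=Trials())
```

Each call to `fmin` plays the role of one `BayesOptim` invocation in Algorithm 1, with `max_evals` corresponding to the search depth B.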
3.3 Implementation

Fast AutoAugment searches for the desired augmentation policies by applying the aforementioned Bayesian optimization to distributed train splits. In other words, the overall search process consists of two steps: (1) training model parameters on the K-fold train data with default augmentation rules, and (2) exploration-and-exploitation using HyperOpt to search for the optimal augmentation policies. Below, we describe the practical implementation of the overall steps in Algorithm 1. The following procedures are mostly parallelizable, which makes the proposed method more efficient in actual usage. We utilize Ray [24] to implement Fast AutoAugment, which enables us to train models and search policies in a distributed manner.

Algorithm 1: Fast AutoAugment
Input: (θ, Dtrain, K, T, B, N)
1  Split Dtrain into K-fold data D(k)train = {(D(k)M, D(k)A)}      // stratified shuffling
2  for k ∈ {1, . . . , K} do
3      T(k)* ← ∅, (DM, DA) ← (D(k)M, D(k)A)                        // initialize
4      Train θ on DM
5      for t ∈ {0, . . . , T − 1} do
6          B ← BayesOptim(T, L(θ|T(DA)), B)                        // explore-and-exploit
7          Tt ← Select top-N policies in B
8          T(k)* ← T(k)* ∪ Tt                                      // merge augmentation policies
9  return T* = ⋃_k T(k)*

Shuffle (Line 1): We split training sets while preserving the percentage of samples for each class (stratified shuffling) using the StratifiedShuffleSplit method in sklearn [27].
Train (Line 4): Train models on each training split. We implement this to run in parallel across multiple machines to reduce the total running time when sufficient computational resources are available.
Explore-and-Exploit (Line 6): We use the HyperOpt library from Ray with B search iterations and a maximum of 20 concurrent evaluations. Different from AutoAugment, we do not discretize the search space since our search algorithm can handle continuous values. We explore one of the possible operations with probability p and magnitude λ. The values of the probability and the magnitude are uniformly sampled from [0, 1] at the beginning, then HyperOpt modulates the values to optimize the objective L.
Merge (Line 7-9): Select the top-N best policies for each split and then combine the obtained policies from all splits. This set of final policies is used for re-training.
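As a companion to Algorithm 1, the following compact Python sketch shows how the pieces fit together: the stratified split (line 1), per-fold training (line 4), repeated Bayesian exploration (line 6), and the top-N merge (lines 7-9). The `train_model` and `search_policies` helpers (for example, the HyperOpt loop sketched in Section 3.2.2), the 50/50 DM/DA split ratio, and the candidate format are assumptions of this illustration rather than the reference code.

```python
from sklearn.model_selection import StratifiedShuffleSplit

def fast_autoaugment(images, labels, K=5, T=2, B=200, N=10):
    """High-level sketch of Algorithm 1 on numpy arrays of images/labels."""
    final_policies = []                                        # T_* in the paper
    splitter = StratifiedShuffleSplit(n_splits=K, test_size=0.5)
    for m_idx, a_idx in splitter.split(images, labels):        # lines 1-2: K stratified folds
        model = train_model(images[m_idx], labels[m_idx])      # line 4: train without augmentation
        for _ in range(T):                                     # line 5: T exploration rounds
            # line 6: B candidates scored by the frozen model's loss on T(D_A)
            candidates = search_policies(model, images[a_idx], labels[a_idx], max_evals=B)
            candidates.sort(key=lambda c: c["loss"])           # lines 7-8: keep the top-N policies
            final_policies += [c["policy"] for c in candidates[:N]]
    return final_policies                                      # line 9: merged policy set
```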
4 Experiments and Results

In this section, we examine the performance of Fast AutoAugment (FAA) on the CIFAR-10, CIFAR-100 [20], and ImageNet [6] datasets and compare the results with baseline preprocessing, Cutout [7], AutoAugment (AA) [5], and PBA [15]. For ImageNet, we only compare the baseline, AA, and FAA since PBA does not conduct experiments on ImageNet. We follow the experimental setting of AA for fair comparison, except that an evaluation of the proposed method on the AmoebaNet-B model [28] is omitted. As in AA, each sub-policy consists of two operations (Nτ = 2), each policy consists of five sub-policies (NT = 5), and the search space consists of the same 16 operations (ShearX, ShearY, TranslateX, TranslateY, Rotate, AutoContrast, Invert, Equalize, Solarize, Posterize, Contrast, Color, Brightness, Sharpness, Cutout, Sample Pairing). Interestingly, FAA is able to select Cutout in the searched policies. We conjecture that Cutout can probably eliminate irrelevant backgrounds and improve the classification accuracy when the inference is performed on a well-trained network. We use 5-fold stratified shuffling (K = 5), a search width of 2 (T = 2), a search depth of 200 (B = 200), and 10 selected policies (N = 10) for policy evaluation. Due to the efficiency of the proposed search process, FAA can find a large number of optimized augmentation policies almost regardless of how many are requested. Therefore, we can consider the number of sub-policies as a hyperparameter to tune.

When we use multi-threading for data augmentation, we observe no actual increase in training time due to augmentation in comparison to the baseline without augmentation. Moreover, even when we perform both the data augmentation and the SGD weight updates in a single thread as sequential processing, the increase in training time that we observe is only 10-20% over 200 epochs; in total, less than 5 hours on CIFAR-10/100 with WResNet28x10 and a single V100 GPU. Hence the training time overhead from an increased number of sub-policies is also limited. Having this in mind, we performed FAA with different numbers of sub-policies and determined the number of sub-policies that produces the best average performance across different datasets and networks. However, as shown in Figure 3, the performances obtained with 25 sub-policies are already comparable to those obtained with more sub-policies. We increase the batch size and adapt the learning rate accordingly to boost the training [38]. Otherwise, we set the other hyperparameters equal to AA where possible. For the unknown hyperparameters, we follow values from the original references or tune them to match baseline performances.
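For quick reference, the search settings quoted above can be grouped into a single configuration block; this is purely an illustrative summary of the numbers reported in the text, not the configuration format of the released code.

```python
# Illustrative summary of the reported Fast AutoAugment search settings.
FAA_SEARCH_SETTINGS = {
    "folds_K": 5,                       # stratified shuffle splits of D_train
    "search_width_T": 2,                # exploration rounds per fold
    "search_depth_B": 200,              # candidate policies per round (HyperOpt evaluations)
    "top_N": 10,                        # policies kept per round
    "ops_per_subpolicy": 2,             # N_tau
    "subpolicies_per_policy": 5,        # N_T
    "probability_range": (0.0, 1.0),    # continuous, not discretized
    "magnitude_range": (0.0, 1.0),
}
```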
Model                      Baseline   Cutout [7]   AA [5]   PBA [15]   FAA (transfer / direct)
Wide-ResNet-40-2           5.3        4.1          3.7      −          3.6 / 3.7
Wide-ResNet-28-10          3.9        3.1          2.6      2.6        2.7 / 2.7
Shake-Shake(26 2×32d)      3.6        3.0          2.5      2.5        2.7 / 2.5
Shake-Shake(26 2×96d)      2.9        2.6          2.0      2.0        2.0 / 2.0
Shake-Shake(26 2×112d)     2.8        2.6          1.9      2.0        2.0 / 1.9
PyramidNet+ShakeDrop       2.7        2.3          1.5      1.5        1.8 / 1.7
Table 2: Test set error rate (%) on CIFAR-10.

Model                      Baseline   Cutout [7]   AA [5]   PBA [15]   FAA (transfer / direct)
Wide-ResNet-40-2           26.0       25.2         20.7     −          20.7 / 20.6
Wide-ResNet-28-10          18.8       18.4         17.1     16.7       17.2 / 17.2
Shake-Shake(26 2×96d)      17.1       16.0         14.3     15.3       14.9 / 14.6
PyramidNet+ShakeDrop       14.0       12.2         10.7     10.9       11.9 / 11.7
Table 3: Test set error rate (%) on CIFAR-100.

Model                      Baseline   Cutout [7]   AA [5]   PBA [15]   FAA
Wide-ResNet-28-10          1.5        1.3          1.1      1.2        1.1
Table 4: Test set error rate (%) on SVHN.

Model                      Baseline      AA [5]        FAA
ResNet-50                  23.7 / 6.9    22.4 / 6.2    22.4 / 6.3
ResNet-200                 21.5 / 5.8    20.0 / 5.0    19.4 / 4.7
Table 5: Validation set Top-1 / Top-5 error rate (%) on ImageNet.

4.1 CIFAR-10 and CIFAR-100

For both CIFAR-10 and CIFAR-100, we conduct two experiments using FAA: (1) direct search on the full dataset with the given target network, and (2) transfer of the policies found by Wide-ResNet-40-2 on reduced CIFAR-10, which consists of 4,000 randomly chosen examples. As shown in Tables 2 and 3, overall, FAA significantly improves on the baseline and Cutout for every network while achieving performance comparable to that of AA.

CIFAR-10 Results   In Table 2, we present the test set error rates for different models. We examine Wide-ResNet-40-2, Wide-ResNet-28-10 [40], Shake-Shake [8], and PyramidNet+ShakeDrop [37] models to evaluate the test set error rate of FAA. FAA achieves results comparable to AA and PBA in both experiments. We emphasize that the policy search takes only 3.5 GPU-hours on reduced CIFAR-10. We also estimate the search time for a full direct search. In the worst case, PyramidNet+ShakeDrop requires 780 GPU-hours, which is still less than the computation time of AA (5,000 GPU-hours).

CIFAR-100 Results   Results are shown in Table 3. Again, FAA achieves significantly better results than the baseline and Cutout. However, except for Wide-ResNet-40-2, FAA shows slightly worse results than AA and PBA. Nevertheless, the search costs of the proposed method on CIFAR-100 are the same as those on CIFAR-10. We conjecture that the performance gaps between the other methods and FAA are probably caused by insufficient policy search in the exploration procedure or by over-training of the model parameters in the proposed algorithm.

Figure 3: Validation error (%) of Wide-ResNet-40-2 and Wide-ResNet-28-10 trained on CIFAR-10 and CIFAR-100 as a function of the number of sub-policies used in training.

4.2 SVHN

We conducted an experiment on the SVHN dataset [25] with the same settings as in AA. We chose 1,000 examples randomly and applied FAA to find augmentation policies. The obtained policies are applied to an initial model, and we obtain performance comparable to AA.
Results are shown in Table 4: Wide-ResNet-28-10 trained with the searched policies performs better than the baseline and Cutout, and is comparable with the other methods. We emphasize that we use the same settings as for CIFAR, while AA tuned several hyperparameters on the validation set.

4.3 ImageNet

Following the experimental setting of AA, we use a reduced subset of the ImageNet training data composed of 6,000 samples from 120 randomly selected classes. A ResNet-50 [12] is trained on each fold for 90 epochs during the policy search phase, and we then train ResNet-50 [12] and ResNet-200 [13] with the searched augmentation policy. In Table 5, we compare the validation error rates of FAA with those of the baseline and AA for ResNet-50 and ResNet-200. In this test, we exclude AmoebaNet [28] since its exact implementation is not publicly available. As one can see from the table, the proposed method outperforms the benchmarks. Furthermore, our search method is 33 times faster than AA under the same experimental settings (see Table 1). Since extensive data augmentation protects the network from overfitting [14], we believe performance could be further improved by reducing the weight decay, which is currently tuned for the model trained with the default augmentation rules.

5 Discussion

Effect of Number of Augmentation Policies   Similar to AA, we hypothesize that as we increase the number of sub-policies searched by FAA, the given neural network should show improved generalization performance. We investigate this hypothesis with the trained models Wide-ResNet-40-2 and Wide-ResNet-28-10 on CIFAR-10 and CIFAR-100. We select sub-policy sets from a pool of 400 searched sub-policies and train the models again with each of these sub-policy sets. Figure 3 shows the relation between the average validation error and the number of sub-policies used in training. This result verifies that performance improves with more sub-policies, up to 100-125 sub-policies.
As one can observe in Tables 2-3, there are small gaps between the performance of policies from direct search and that of policies transferred from reduced CIFAR-10 with Wide-ResNet-40-2. Those gaps increase as the model capacity increases, since augmentation policies searched with a small model are limited in how much they can improve the generalization performance of a larger model (e.g., Shake-Shake). Nevertheless, the transferred policies are better than the default augmentations; hence, one can apply those policies to different image recognition tasks.

Comparison between Random Search Strategies   We performed additional experiments with two random search strategies: (1) randomly pre-selected augmentations (RPSA), which first selects a certain number (25/50) of augmentation policies randomly from the search space and then trains Wide-ResNet-28-10 with the selected augmentations over 200 epochs; and (2) random augmentations (RA), which independently samples an augmentation policy for each training input from the whole search space during training over 400 epochs, i.e., twice as many epochs as AA and FAA, to compensate for the search time of both algorithms. Both RPSA and RA are performed on CIFAR-100 and repeated 20 times.
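For illustration, the following sketch contrasts the two baselines. It assumes a full apply_sub_policy helper that implements all 16 operations (the earlier sketch implements only two of them), and sample_random_sub_policy is an assumed helper that draws names, probabilities, and magnitudes uniformly from the same search space.

# Illustrative sketch of the two random baselines (RPSA and RA). apply_sub_policy
# is assumed to handle every operation name listed below; all helpers here are
# assumptions for illustration, not the paper's exact implementation.
import random

OPS = ["ShearX", "ShearY", "TranslateX", "TranslateY", "Rotate", "AutoContrast",
       "Invert", "Equalize", "Solarize", "Posterize", "Contrast", "Color",
       "Brightness", "Sharpness", "Cutout", "SamplePairing"]

def sample_random_sub_policy(n_ops=2):
    # (name, probability, magnitude) drawn uniformly from the whole search space
    return [(random.choice(OPS), random.random(), random.random()) for _ in range(n_ops)]

def rpsa_policy_set(n_policies):
    # RPSA: fix a random set of sub-policies once, before training (25 or 50 in the paper)
    return [sample_random_sub_policy() for _ in range(n_policies)]

def rpsa_augment(img, policy_set):
    # during the 200 training epochs, pick one pre-selected sub-policy per input
    return apply_sub_policy(img, random.choice(policy_set))

def ra_augment(img):
    # RA: draw a brand-new sub-policy from the whole space for every training input
    return apply_sub_policy(img, sample_random_sub_policy())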
As shown in Figure 4, RPSA performs better than the baseline, but its performance does not improve as the number of selected policies increases, and the best performance obtained by RPSA is still worse than that of FAA. In addition, RA achieves slightly worse results than RPSA, and the improvement from RA is also smaller than that from FAA. Note that even when we take into account the search time of the proposed method on CIFAR-10/100 (see Table 1), the training time for FAA with 200 epochs, including the search time, is shorter than the training time for RA with 400 epochs.
Recently, the proposed FAA contributed to winning first place in the AutoCV competition of the NeurIPS 2019 AutoDL challenge [1]. In particular, since this competition required an AutoML approach under very limited computational resources and time, only a light version of FAA [3] could be applied for augmentation search in this setting, and it eventually led to a performance improvement. The details of this result will be published in the near future.

Figure 4: Comparison of test error (%) of Wide-ResNet-28-10 trained on CIFAR-100 among random search strategies, AA, and FAA.

Search of Augmentation Policies per Class   Taking advantage of the fact that the algorithm is efficient, we experimented with searching for augmentation policies per class on CIFAR-100 with the Wide-ResNet-40-2 model. We changed the search depth B to 100 and kept the other parameters the same. With the 70 best-performing policies per class, we obtained a slightly improved error rate. Although it is difficult to see a definite improvement compared to AA and FAA, we believe that further optimization in this direction may improve performance more. In particular, we expect the effect to be greater for datasets in which differences between classes, such as object scale, are large.
One can also try tuning the other meta-parameters of the Bayesian optimization, such as the search depth or the kernel type in the TPE algorithm, in the augmentation search phase. However, empirically this does not significantly help to improve model performance.

6 Conclusion

We propose an automatic process of learning augmentation policies for a given task and convolutional neural network. Our search method is significantly faster than AutoAugment, and its performance substantially surpasses that of human-crafted augmentation methods.
One can apply Fast AutoAugment to advanced architectures such as AmoebaNet and consider additional augmentation operations in the proposed search algorithm without increasing the search cost. Moreover, the joint optimization of NAS and Fast AutoAugment is an intriguing area in AutoML. We leave these for future work. We also plan to apply Fast AutoAugment to various computer vision tasks beyond image classification in the near future.

Acknowledgement   We thank every reviewer for their valuable comments. We are also grateful to the Brain Cloud team at Kakao Brain for GPU support.

References
[1] NeurIPS 2019 AutoDL challenges. https://autodl.chalearn.org/.
[2] J. S. Bergstra, R. Bardenet, Y. Bengio, and B. Kégl. Algorithms for hyper-parameter optimization. In Advances in Neural Information Processing Systems, pages 2546–2554, 2011.
[3] Kakao Brain. AutoCLINT, automatic computationally light network transfer. https://github.com/kakaobrain/autoclint, 2019.
[4] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4):834–848, 2018.
[5] E. D. Cubuk, B. Zoph, D. Mane, V. Vasudevan, and Q. V. Le. Autoaugment: Learning augmentation strategies from data. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019.
[6] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009.
[7] T. DeVries and G. W. Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.
[8] X. Gastaldi. Shake-shake regularization. arXiv preprint arXiv:1705.07485, 2017.
[9] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672–2680, 2014.
[10] D. Han, J. Kim, and J. Kim. Deep pyramidal residual networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), July 2017.
[11] K. He, G. Gkioxari, P. Dollár, and R. Girshick. Mask r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, pages 2961–2969, 2017.
[12] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
[13] K. He, X. Zhang, S. Ren, and J. Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pages 630–645. Springer, 2016.
[14] A. Hernández-García and P. König. Data augmentation instead of explicit regularization. arXiv preprint arXiv:1806.03852, 2018.
[15] D. Ho, E. Liang, I. Stoica, P. Abbeel, and X. Chen. Population based augmentation: Efficient learning of augmentation policy schedules. In ICML, 2019.
[16] J. Hu, L. Shen, and G. Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7132–7141, 2018.
[17] M. Jaderberg, V. Dalibard, S. Osindero, W. M. Czarnecki, J. Donahue, A. Razavi, O. Vinyals, T. Green, I. Dunning, K. Simonyan, et al. Population based training of neural networks. arXiv preprint arXiv:1711.09846, 2017.
[18] D. R. Jones. A taxonomy of global optimization methods based on response surfaces. Journal of Global Optimization, 21(4):345–383, 2001.
[19] S. Kim, I. Kim, S. Lim, C. Kim, W. Baek, H. Cho, B. Yoon, and T. Kim. Scalable neural architecture search for 3d medical image segmentation. 2018.
[20] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
[21] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pages 1097–1105, 2012.
[22] J. Lemley, S. Bazrafkan, and P. Corcoran. Smart augmentation learning an optimal data augmentation strategy. IEEE Access, 5:5858–5869, 2017.
[23] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.-Y. Fu, and A. C. Berg. Ssd: Single shot multibox detector. In European Conference on Computer Vision, pages 21–37. Springer, 2016.
[24] P. Moritz, R. Nishihara, S. Wang, A. Tumanov, R. Liaw, E. Liang, M. Elibol, Z. Yang, W. Paul, M. I. Jordan, et al. Ray: A distributed framework for emerging AI applications. In 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI 18), pages 561–577, 2018.
[25] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng. Reading digits in natural images with unsupervised feature learning. 2011.
[26] M. Paschali, W. Simson, A. G. Roy, M. F. Naeem, R. Göbl, C. Wachinger, and N. Navab. Data augmentation with manifold exploring geometric transformations for increased performance and robustness. arXiv preprint arXiv:1901.04420, 2019.
[27] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825–2830, 2011.
[28] E. Real, A. Aggarwal, Y. Huang, and Q. V. Le. Regularized evolution for image classifier architecture search. arXiv preprint arXiv:1802.01548, 2018.
[29] S. Ren, K. He, R. Girshick, and J. Sun. Faster r-cnn: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91–99, 2015.
[30] I. Sato, H. Nishimura, and K. Yokoi. Apac: Augmented pattern classification with neural networks. arXiv preprint arXiv:1505.03229, 2015.
[31] M. Shahrokh Esfahani and E. R. Dougherty. Effect of separate sampling on classification accuracy. Bioinformatics, 30(2):242–250, 2013.
[32] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb. Learning from simulated and unsupervised images through adversarial training. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2107–2116, 2017.
[33] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
[34] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818–2826, 2016.
[35] G. R. Terrell, D. W. Scott, et al. Variable kernel density estimation. The Annals of Statistics, 20(3):1236–1265, 1992.
[36] T. Tran, T. Pham, G. Carneiro, L. Palmer, and I. Reid. A bayesian data augmentation approach for learning deep models. In Advances in Neural Information Processing Systems, pages 2797–2806, 2017.
[37] Y. Yamada, M. Iwamura, T. Akiba, and K. Kise. Shakedrop regularization for deep residual learning. arXiv preprint arXiv:1802.02375, 2018.
[38] Y. You, I. Gitman, and B. Ginsburg. Large batch training of convolutional networks. arXiv preprint arXiv:1708.03888, 2017.
[39] S. Yun, D. Han, S. J. Oh, S. Chun, J. Choe, and Y. Yoo. Cutmix: Regularization strategy to train strong classifiers with localizable features. arXiv preprint arXiv:1905.04899, 2019.
[40] S. Zagoruyko and N. Komodakis. Wide residual networks. In British Machine Vision Conference 2016.
British Machine Vision Association, 2016.
[41] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
[42] B. Zoph, V. Vasudevan, J. Shlens, and Q. V. Le. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8697–8710, 2018.