{"title": "Provably robust boosted decision stumps and trees against adversarial attacks", "book": "Advances in Neural Information Processing Systems", "page_first": 13017, "page_last": 13028, "abstract": "The problem of adversarial robustness has been studied extensively for neural networks. However, for boosted decision trees and decision stumps there are almost no results, even though they are widely used in practice (e.g. XGBoost) due to their accuracy, interpretability, and efficiency. We show in this paper that for boosted decision stumps the \\textit{exact} min-max robust loss and test error for an $l_\\infty$-attack can be computed in $O(T\\log T)$ time per input, where $T$ is the number of decision stumps and the optimal update step of the ensemble can be done in $O(n^2\\,T\\log T)$, where $n$ is the number of data points. For boosted trees we show how to efficiently calculate and optimize an upper bound on the robust loss, which leads to state-of-the-art robust test error for boosted trees on MNIST (12.5\\% for $\\epsilon_\\infty=0.3$), FMNIST (23.2\\% for $\\epsilon_\\infty=0.1$), and CIFAR-10 (74.7\\% for $\\epsilon_\\infty=8/255$). Moreover, the robust test error rates we achieve are competitive to the ones of provably robust convolutional networks. The code of all our experiments is available at \\url{http://github.com/max-andr/provably-robust-boosting}.", "full_text": "Provably Robust Boosted Decision Stumps and Trees\n\nagainst Adversarial Attacks\n\nMaksym Andriushchenko\n\nUniversity of T\u00fcbingen\n\nmaksym.andriushchenko@uni-tuebingen.de\n\nMatthias Hein\n\nUniversity of T\u00fcbingen\n\nmatthias.hein@uni-tuebingen.de\n\nAbstract\n\nThe problem of adversarial robustness has been studied extensively for neural\nnetworks. However, for boosted decision trees and decision stumps there are almost\nno results, even though they are widely used in practice (e.g. XGBoost) due to their\naccuracy, interpretability, and ef\ufb01ciency. 
We show in this paper that for boosted decision stumps the exact min-max robust loss and test error for an l∞-attack can be computed in O(T log T) time per input, where T is the number of decision stumps, and that the optimal update step of the ensemble can be done in O(n² T log T), where n is the number of data points. For boosted trees we show how to efficiently calculate and optimize an upper bound on the robust loss, which leads to state-of-the-art robust test error for boosted trees on MNIST (12.5% for ε∞ = 0.3), FMNIST (23.2% for ε∞ = 0.1), and CIFAR-10 (74.7% for ε∞ = 8/255). Moreover, the robust test error rates we achieve are competitive with those of provably robust convolutional networks. The code of all our experiments is available at http://github.com/max-andr/provably-robust-boosting.

1 Introduction

It has recently been shown that deep neural networks are easily fooled by imperceptible perturbations called adversarial examples [62, 24], or tend to output high-confidence predictions on out-of-distribution inputs [51, 49, 29] that have nothing to do with the original classes. The most popular defense against adversarial examples is adversarial training [24, 45], which is formulated as a robust optimization problem [59, 45]. However, the inner maximization problem is likely to be NP-hard for neural networks, as computing optimal adversarial examples is NP-hard [33, 71]. A large variety of sophisticated defenses proposed for neural networks [31, 7, 43] could be broken again via more sophisticated attacks [1, 18, 48]. Moreover, empirical robustness, evaluated by some attack, can also arise from gradient masking or obfuscation [1], in which case gradient-free or black-box attacks often break heuristic defenses. 
A solution to this problem is offered by methods that provide provable robustness guarantees [28, 72, 54, 77, 68, 75, 13, 25] or lead to classifiers which can be certified via exact combinatorial solvers [63]. However, these solvers do not scale to large neural networks, and networks with robustness guarantees lag behind standard ones in prediction performance. The only scalable certification method is randomized smoothing [41, 42, 12, 57]; however, obtaining tight certificates for norms other than l2 is an open research question.

While the adversarial problem has been studied extensively for neural networks, other classifiers have received much less attention, e.g. kernel machines [76, 56, 28], k-nearest neighbors [69], and decision trees [52, 3, 9]. Boosting, in particular of decision trees, is very popular in practice due to its interpretability, competitive prediction performance, and efficient recent implementations such as XGBoost [10] and LightGBM [34]. Thus there is also a need to develop boosting methods which are robust to worst-case measurement error or adversarial changes of the input data. While robust boosting has been extensively considered in the literature [70, 44, 19], in that context it refers to a large functional margin or robustness with respect to outliers, e.g. via a robust loss function, and not to the adversarial robustness we consider in this paper.

33rd Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, Canada.

Figure 1: Left: boosted decision stumps: normal and our robust models. Right: boosted decision trees: normal and our robust models. In both cases, the normal models have a very small geometric margin, while our robust models also classify all training points correctly but additionally enforce a large geometric margin.

In the context of adversarial robustness, very recently [9] considered the robust min-max loss for an ensemble of decision trees with coordinate-aligned splits. They proposed an approximation of the inner maximization problem, but without any guarantees. Robustness guarantees were then obtained via the mixed-integer formulation of [32] for the computation of the minimal adversarial perturbation for tree ensembles. However, this approach has limited scalability to large problems.

Contributions  In this paper, we show how to exactly compute the robust loss and robust test error with respect to l∞-norm perturbations for an ensemble of decision stumps with coordinate-aligned splits. This can be done efficiently in O(T log T) time per data point, where T is the number of decision stumps. Moreover, we show how to perform the globally optimal update of an ensemble of decision stumps by directly minimizing the robust loss, without any approximation, in O(n² T log T) time per coordinate, where n is the number of training examples. We also derive a strict upper bound on the robust loss for tree ensembles based on our results for an ensemble of decision stumps. It can be efficiently evaluated in O(T l) time, where l is the number of leaves in the tree. Then we show how this upper bound can be minimized during training in O(n² l) time per coordinate. Our derived upper bound is quite tight empirically and leads to provable guarantees on the robustness of the resulting tree ensemble. The difference between the resulting robust boosted decision stumps and trees and normally trained models is visualized in Figure 1.

2 Boosting and Robust Optimization for Adversarial Robustness

In this section we fix the notation and the framework of boosting, and briefly describe the basis of robust optimization for adversarial robustness, which underlies adversarial training. 
In the next sections we derive the specific robust training procedure for an ensemble of decision stumps, where we optimize the exact robust loss, and for a tree ensemble, where we optimize an upper bound.

Boosting  While the main ideas can be generalized to the multi-class setting (using one-vs-all, see Appendix E), for simplicity of the derivations we restrict ourselves to binary classification, that is, our labels y are in {−1, 1} and we assume to have d real-valued features. Boosting can be described as the task of fitting an ensemble F : ℝ^d → ℝ of weak learners f_t : ℝ^d → ℝ given as F(x) = Σ_{t=1}^T f_t(x). The final classification is done via the sign of F(x). In boosting the ensemble is fitted in a greedy way, in the sense that given the already estimated ensemble, we determine an update F′ = F + f_{T+1} by fitting the new weak learner f_{T+1}, guided by the performance of the current ensemble F. In the experiments of this paper we use the exponential loss L : ℝ → ℝ in the functional margin formulation, where for a point (x, y) ∈ ℝ^d × {−1, 1} it is defined as L(y f(x)) = exp(−y f(x)). However, all following algorithms and derivations hold for any margin-based, strictly monotonically decreasing, convex loss function L, e.g. the logistic loss L(y f(x)) = ln(1 + exp(−y f(x))). The advantage of the exponential loss is that it decouples F and the update f_{T+1} in the estimation process and allows us to see the estimation of f_{T+1} as fitting a weighted exponential loss, where the weight of (x, y) is given by exp(−y F(x)):

L(y F′(x)) = exp(−y (F(x) + f_{T+1}(x))) = exp(−y F(x)) exp(−y f_{T+1}(x)).

In this paper we consider as weak learners: a) decision stumps (i.e. trees of depth one) of the form f_{t,i} : ℝ^d → ℝ, f_{t,i}(x) = w_l + w_r 1[x_i ≥ b], where one does a coordinate-aligned split, and b) decision trees (binary trees) of the form f_t(x) = u^(t)_{q_t(x)}, where u^(t) : V → 
ℝ is a mapping from the set of leaves V of the tree to ℝ, and q_t : ℝ^d → V is a mapping which assigns to every input the leaf of the tree it ends up in. While the approach can be generalized to general linear splits of the form w_l + w_r 1[⟨v, x⟩ ≥ b], we concentrate on coordinate-aligned splits w_l + w_r 1[x_i ≥ b], which are more common in practice since they lead to competitive performance and are easier for humans to interpret.

Robust optimization for adversarial robustness  Finding the minimal perturbation with respect to some l_p-distance can be formulated as the following optimization problem:

min_{δ ∈ ℝ^d} ‖δ‖_p   such that   y_i f(x_i + δ) ≤ 0,   x_i + δ ∈ C,   (1)

where (x_i, y_i) ∈ ℝ^d × {−1, 1} and C is a set of constraints every input has to fulfill. In this paper we assume C = [0, 1]^d and that all features are normalized to be in this range. We emphasize that we concentrate on continuous features; for adversarial perturbations of discrete features we refer to [53, 17, 36]. We denote by δ*_i the optimal solution of this problem for (x_i, y_i). Furthermore, let Δ_p(ε) := {δ ∈ ℝ^d | ‖δ‖_p ≤ ε} be the set of perturbations with respect to which we aim to be robust. Then the robust test error with respect to Δ_p(ε) is defined for n data points as (1/n) Σ_{i=1}^n 1[‖δ*_i‖_p ≤ ε].

The optimization problem (1) is non-convex for neural networks and can only be solved exactly via mixed-integer programming [63], which scales exponentially with the number of hidden neurons. Since such an evaluation is prohibitively expensive in most cases, robustness is often evaluated via heuristic attacks [47, 45, 8], which results in lower bounds on the robust test error. Provable robustness aims at providing upper bounds on the robust test error and at the optimization of these bounds during training [28, 72, 54, 77, 75, 13, 25, 12]. 
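Returning to the weak learners defined at the beginning of this section, a minimal sketch may help fix the notation. The following Python code (our own class and method names, purely illustrative and not the paper's implementation) builds an ensemble of coordinate-aligned decision stumps f_t(x) = w_l + w_r 1[x_{c_t} ≥ b_t] and classifies via the sign of F(x):

```python
class StumpEnsemble:
    """Ensemble of coordinate-aligned decision stumps F(x) = sum_t f_t(x),
    with f_t(x) = w_l + w_r * 1[x_{c_t} >= b_t]. Illustrative sketch only."""

    def __init__(self):
        self.stumps = []  # list of (coordinate, w_l, w_r, b)

    def add_stump(self, coord, w_l, w_r, b):
        self.stumps.append((coord, w_l, w_r, b))

    def predict_margin(self, x):
        # F(x): sum of the stump outputs
        return sum(w_l + w_r * (x[c] >= b) for c, w_l, w_r, b in self.stumps)

    def predict_label(self, x):
        # the final classification is the sign of F(x)
        return 1 if self.predict_margin(x) > 0 else -1

# usage: two stumps splitting coordinates 0 and 1
F = StumpEnsemble()
F.add_stump(0, -0.5, 1.0, 0.4)   # f_1(x) = -0.5 + 1.0 * 1[x_0 >= 0.4]
F.add_stump(1, 0.2, -0.8, 0.6)   # f_2(x) =  0.2 - 0.8 * 1[x_1 >= 0.6]
x = [0.5, 0.3]
# F(x) = (-0.5 + 1.0) + (0.2 + 0.0) = 0.7, so the label is +1
```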
For an ensemble of trees the optimization problem (1) can also be reformulated as a mixed-integer program [32], which does not scale to large ensembles.

The goal of improving adversarial robustness can be formulated as a robust optimization problem with respect to the set of allowed perturbations Δ_p(ε) [59, 45]:

min_θ Σ_{i=1}^n max_{δ ∈ Δ_p(ε)} L(f(x_i + δ; θ), y_i).   (2)

A training process where one tries at each update step to approximately solve the inner maximization problem is called adversarial training [24]. We note that the maximization problem is in general non-concave, and thus globally optimal solutions are very difficult to obtain. Our goal in the following two sections is to get provable robustness guarantees for boosted stumps and trees by directly optimizing (2) or an upper bound on the inner maximization problem.

3 Exact Robust Optimization for Boosted Decision Stumps

We first show how the exact robust loss max_{δ ∈ Δ∞(ε)} L(y_i F(x_i + δ; θ)) can be computed for an ensemble F of decision stumps. While decision stumps are very simple weak learners, they were used in the original AdaBoost [20] and were successfully applied in object detection [66] and face detection [67], which could be done in real time due to the simplicity of the classifier.

3.1 Exact Robust Test Error for Boosted Decision Stumps

The ensemble of decision stumps can be written as

F(x) = Σ_{t=1}^T f_{t,c_t}(x) = Σ_{t=1}^T ( w_l^(t) + w_r^(t) 1[x_{c_t} ≥ b_t] ),

where c_t is the coordinate on which f_t splits. First, observe that a point x ∈ ℝ^d with label y is correctly classified when yF(x) > 0. In order to determine whether the point x is adversarially robust wrt l∞-perturbations, one has to solve the following optimization problem:

G(x, y) := min_{‖δ‖∞ ≤ ε} yF(x + δ).   (3)

If G(x, y) ≤ 0, then the point x is non-robust. If G(x, y) > 0, then the point x is robust, i.e. 
it is not possible to change the class. Thus the exact minimization of (3) over the test set yields the exact robust test error. For many state-of-the-art classifiers, this problem is NP-hard. For MIP formulations for tree ensembles, see [32]; for neural networks, see [63]. Closed-form solutions are known only for the simplest models such as linear classifiers [24].

We can solve this certification problem for the robust test error exactly and efficiently by noting that the objective and the attack model Δ∞(ε) are separable wrt the input dimensions. Therefore, we have to solve up to d simple one-dimensional optimization problems. We denote by S_k = {s ∈ {1, ..., T} | c_s = k} the set of stump indices that split coordinate k. Then

min_{‖δ‖∞ ≤ ε} yF(x + δ) = min_{‖δ‖∞ ≤ ε} Σ_{t=1}^T y f_{t,c_t}(x + δ) = min_{‖δ‖∞ ≤ ε} Σ_{k=1}^d Σ_{s ∈ S_k} y f_{s,k}(x + δ)

= Σ_{k=1}^d min_{|δ_k| ≤ ε} Σ_{s ∈ S_k} y f_{s,k}(x + δ) = Σ_{k=1}^d [ Σ_{s ∈ S_k} y w_l^(s) + min_{|δ_k| ≤ ε} Σ_{s ∈ S_k} y w_r^(s) 1[x_k + δ_k ≥ b_s] ] =: Σ_{k=1}^d G_k(x, y).   (4)

The one-dimensional optimization problem min_{|δ_k| ≤ ε} Σ_{s ∈ S_k} y w_r^(s) 1[x_k + δ_k ≥ b_s] can be solved by simply checking all |S_k| + 1 piecewise-constant regions of the classifier for δ_k ∈ [−ε, ε]. The detailed algorithm can be found in Appendix B. The overall time complexity of the exact certification is O(T log T), since we need to sort up to T thresholds b_s in ascending order to efficiently calculate all partial sums of the objective. 
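The per-coordinate certification just described can be sketched in a few lines of Python (our own function names, purely illustrative; stumps are given as (coordinate, w_l, w_r, b) tuples, and the box constraint C = [0, 1]^d is omitted for brevity):

```python
def coord_min(y, stumps_k, x_k, eps):
    """G_k(x, y): min over |delta_k| <= eps of sum_{s in S_k} y * f_{s,k}(x + delta).
    stumps_k: list of (w_l, w_r, b) for the stumps splitting coordinate k."""
    base = sum(y * w_l for w_l, _, _ in stumps_k)
    # the thresholds inside [x_k - eps, x_k + eps] partition it into
    # piecewise-constant regions; evaluating at each threshold and at the
    # interval endpoints covers every region
    candidates = [x_k - eps, x_k + eps]
    candidates += [b for _, _, b in stumps_k if x_k - eps <= b <= x_k + eps]
    best = min(
        sum(y * w_r for _, w_r, b in stumps_k if z >= b)
        for z in candidates
    )
    return base + best

def exact_robust_margin(y, stumps, x, eps):
    """G(x, y) = sum_k G_k(x, y): exact robust margin of a stump ensemble.
    stumps: list of (coordinate, w_l, w_r, b); x: feature vector."""
    by_coord = {}
    for coord, w_l, w_r, b in stumps:
        by_coord.setdefault(coord, []).append((w_l, w_r, b))
    return sum(coord_min(y, sk, x[k], eps) for k, sk in by_coord.items())

# usage: one stump f(x) = -0.5 + 1.0 * 1[x_0 >= 0.4] at x = [0.5], y = +1
# exact_robust_margin(1, [(0, -0.5, 1.0, 0.4)], [0.5], 0.2)  == -0.5 (non-robust)
# exact_robust_margin(1, [(0, -0.5, 1.0, 0.4)], [0.5], 0.05) ==  0.5 (robust)
```
A point is certified robust exactly when this margin is positive, matching the criterion G(x, y) > 0 above.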
Moreover, using this result, we can obtain provably minimal adversarial examples (see Appendix B for details and Figure 11 for visualizations).

3.2 Exact Robust Loss Minimization for Boosted Decision Stumps

We note that when L is monotonically decreasing, it holds that

max_{δ ∈ Δ∞(ε)} L(y F(x + δ)) = L( min_{δ ∈ Δ∞(ε)} y F(x + δ) ),

and thus the certification algorithm can directly be used to compute also the robust loss. For updating the ensemble F with a new stump f that splits a certain coordinate j, we first have to solve the inner maximization problem over Δ∞(ε) in (2) before¹ we optimize the parameters w_l, w_r, b of f:

max_{‖δ‖∞ ≤ ε} L( y_i F(x_i + δ) + y_i f_j(x_i + δ) ) = L( min_{‖δ‖∞ ≤ ε} [ Σ_{k=1}^d Σ_{s ∈ S_k} y_i f_{s,k}(x_i + δ) + y_i f_j(x_i + δ) ] )

= L( Σ_{k≠j} min_{|δ_k| ≤ ε} Σ_{s ∈ S_k} y_i f_{s,k}(x_i + δ) + min_{|δ_j| ≤ ε} [ Σ_{s ∈ S_j} y_i f_{s,j}(x_i + δ) + y_i f_j(x_i + δ) ] )

= L( Σ_{k≠j} G_k(x_i, y_i) + Σ_{s ∈ S_j} y_i w_l^(s) + y_i w_l + min_{|δ_j| ≤ ε} [ Σ_{s ∈ S_j} y_i w_r^(s) 1[x_{ij} + δ_j ≥ b_s] + y_i w_r 1[x_{ij} + δ_j ≥ b] ] ).

In order to solve the remaining optimization problem over δ_j, we have to make a case distinction based on the values of w_r. However, first we define the minimal values of the ensemble part on δ_j ∈ [−ε, b − x_{ij}) and δ_j ∈ [b − x_{ij}, ε] as

h_l(x_{ij}, y_i) := min_{δ_j ∈ [−ε, b − x_{ij})} Σ_{s ∈ S_j} y_i w_r^(s) 1[x_{ij} + δ_j ≥ b_s],   h_r(x_{ij}, y_i) := min_{δ_j ∈ [b − x_{ij}, ε]} Σ_{s ∈ S_j} y_i w_r^(s) 1[x_{ij} + δ_j ≥ b_s].

These problems can be solved analogously to G_k(x, y). 
Then we get the case distinction:

g(x_{ij}, y_i; w_r) = min_{|δ_j| ≤ ε} [ Σ_{s ∈ S_j} y_i w_r^(s) 1[x_{ij} + δ_j ≥ b_s] + y_i w_r 1[x_{ij} + δ_j ≥ b] ]

= h_r(x_{ij}, y_i) + y_i w_r   if b − x_{ij} < −ε or (|b − x_{ij}| ≤ ε and h_l(x_{ij}, y_i) > h_r(x_{ij}, y_i) + y_i w_r),
= h_l(x_{ij}, y_i)   if b − x_{ij} > ε or (|b − x_{ij}| ≤ ε and h_l(x_{ij}, y_i) ≤ h_r(x_{ij}, y_i) + y_i w_r).

The robust loss is jointly convex in (w_l, w_r) (Lemma 1): to see this, set l = 2, u = (w_l, w_r)^T, r(x̂) = (y_i, y_i 1[x̂_{ij} ≥ b])^T, C = B∞(x_i, ε), and c = Σ_{k≠j} G_k(x_i, y_i).

¹ The order is very important, as a min-max problem is not the same as a max-min problem.

4 Robust Optimization for Boosted Decision Trees

4.1 Upper Bound on the Robust Test Error for Boosted Trees

For an ensemble F of trees, the minimum over δ no longer decouples over the coordinates, but the sum over the trees can be bounded tree-wise:

min_{‖δ‖_p ≤ ε} yF(x + δ) ≥ Σ_{t=1}^T min_{‖δ‖_p ≤ ε} y f_t(x + δ) =: G̃(x, y).   (6)

If G̃(x, y) > 0, then the point x is provably robust. However, if G̃(x, y) ≤ 0, the point may be either robust or non-robust. In this way, we get an upper bound on the number of non-robust points, which yields an upper bound on the robust test error. We note that for a decision tree, min_{‖δ‖_p ≤ ε} y u^(t)_{q_t(x+δ)} can be found exactly by checking all leaves which are reachable for points in B_p(x, ε). This can be done in O(l) time per tree, where l is the number of leaves in the tree.

4.2 Minimization of an Upper Bound on the Robust Loss for Boosted Decision Trees

The goal is to upper bound the inner maximization problem of Equation (2) based on the certificate that we derived. Note that we aim to bound the loss of the whole ensemble F + f, and thus we do not use any approximations of the loss such as the second-order Taylor expansion used in [23, 10]. We use p = ∞, that is, the attack model is Δ∞(ε). Let F(x) = Σ_{t=1}^T f_t(x) = Σ_{t=1}^T u^(t)_{q_t(x)} be a fixed ensemble of trees and f a new tree with which we update the ensemble. 
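Both the bound for the fixed ensemble and the contribution of the new tree require the per-tree minimum min_{‖δ‖∞ ≤ ε} y f_t(x + δ), computed over the leaves reachable within the l∞-box as described above. A hedged Python sketch (hypothetical tuple encoding of the tree nodes, not the reference implementation):

```python
def tree_robust_min(y, node, x, eps):
    """min over ||delta||_inf <= eps of y * (leaf value of x + delta) for one
    axis-aligned decision tree. A node is either ('leaf', value) or
    ('split', coord, threshold, left_subtree, right_subtree), where the
    right branch is taken when x[coord] >= threshold."""
    if node[0] == 'leaf':
        return y * node[1]
    _, coord, b, left, right = node
    vals = []
    if x[coord] - eps < b:      # some point in the box falls left of the split
        vals.append(tree_robust_min(y, left, x, eps))
    if x[coord] + eps >= b:     # some point in the box falls right of the split
        vals.append(tree_robust_min(y, right, x, eps))
    return min(vals)

# usage: a depth-1 tree with leaf values -1.0 (left) and 2.0 (right)
tree = ('split', 0, 0.5, ('leaf', -1.0), ('leaf', 2.0))
# at x = [0.45] with eps = 0.1 both leaves are reachable, so the min is -1.0
```
Summing this quantity over all trees gives the tree-wise lower bound G̃(x, y) on the robust margin; the recursion visits each leaf at most once, matching the O(l) cost per tree stated above.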
Then the robust optimization problem is:

min_f Σ_{i=1}^n max_{‖δ‖∞ ≤ ε} L( y_i F(x_i + δ) + y_i f(x_i + δ) ).   (7)

The inner maximization problem can be upper bounded for every tree separately, given that L(yf(x)) is monotonically decreasing wrt yf(x), using our certificate for the ensemble of T + 1 trees:

max_{‖δ‖∞ ≤ ε} L( y_i F(x_i + δ) + y_i f(x_i + δ) ) = L( min_{‖δ‖∞ ≤ ε} [ Σ_{t=1}^T y_i f_t(x_i + δ) + y_i f(x_i + δ) ] )

≤ L( Σ_{t=1}^T min_{‖δ‖∞ ≤ ε} y_i f_t(x_i + δ) + min_{‖δ‖∞ ≤ ε} y_i f(x_i + δ) ) = L( G̃(x_i, y_i) + min_{‖δ‖∞ ≤ ε} y_i f(x_i + δ) ).   (8)

We can efficiently calculate G̃(x_i, y_i) as described in the previous subsection. But note that min_{‖δ‖∞ ≤ ε} y_i f(x_i + δ) depends on the tree f. Exact tree fitting is known to be NP-complete [39], although it is still possible to scale it to some moderate-sized problems with recent advances in MIP solvers and hardware, as shown in [2]. We want to keep the overall procedure scalable to large datasets, so we stick to the standard greedy recursive algorithm for fitting the tree. At every step of this process, we fit, for some coordinate j ∈ {1, ..., d} and some splitting threshold b, a single decision stump f(x) = w_l + w_r 1[x_j ≥ b]. Therefore, for a particular decision stump with threshold b and coordinate j we have to solve the following problem:

min_{w_l, w_r ∈ ℝ} Σ_{i ∈ I} L( G̃(x_i, y_i) + y_i w_l + min_{|δ_j| ≤ ε} y_i w_r 1[x_{ij} + δ_j ≥ b] ),   (9)

where I is the set of all points x_i for which x_i + δ can reach this leaf for some δ with ‖δ‖∞ ≤ ε.

Finally, we have to make a case distinction depending on the values of w_r and b − x_{ij}:

min_{|δ_j| ≤ ε} y_i w_r 1[x_{ij} + δ_j ≥ b] = y_i w_r · { 1 if b − x_{ij} < −ε or (|b − x_{ij}| ≤ ε and y_i w_r < 0); 0 if b − x_{ij} > ε or (|b − x_{ij}| ≤ ε and y_i w_r ≥ 0) },   (10)

where for brevity we denote the case distinction by π(x_i, y_i; w_r). 
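The case distinction in (10), together with the per-split objective it induces, is cheap to evaluate once G̃(x_i, y_i) is precomputed. A minimal sketch for the exponential loss (our own function names; the paper minimizes the objective by coordinate descent on each sign interval of w_r, which we do not reproduce here):

```python
import math

def robust_indicator(x_ij, y_i, w_r, b, eps):
    """Worst-case value of 1[x_ij + delta_j >= b] over |delta_j| <= eps:
    the adversary picks the branch that minimizes y_i * w_r * indicator."""
    if b - x_ij < -eps:        # the point always lands in the right branch
        return 1.0
    if b - x_ij > eps:         # the point always lands in the left branch
        return 0.0
    # both branches are reachable: the adversary chooses the worse one
    return 1.0 if y_i * w_r < 0 else 0.0

def robust_split_loss(points, w_l, w_r, b, eps):
    """Upper bound on the robust exponential loss of one candidate split.
    points: list of (x_ij, y_i, G_tilde_i) with G_tilde_i precomputed."""
    return sum(
        math.exp(-(g + y * w_l + y * w_r * robust_indicator(x, y, w_r, b, eps)))
        for x, y, g in points
    )
```
For a single point with x_ij = 0.5, y_i = +1, G̃ = 0.2 and a split at b = 0.7 with ε = 0.05, the split lies outside the reachable box, so the indicator is 0 and the loss is exp(−(0.2 + w_l)).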
Note that the right side of (10) is concave as a function of w_r. Thus the overall robust optimization amounts to finding the minimum of the following objective, which is again by Lemma 1 jointly convex in (w_l, w_r):

L*(j, b) = min_{w_l, w_r} Σ_{i ∈ I} L( G̃(x_i, y_i) + y_i w_l + y_i w_r π(x_i, y_i; w_r) ),   (11)

writing π(x_i, y_i; w_r) for the case distinction in (10). Note that this case distinction is fixed once we fix the sign of w_r. This allows us to avoid bisection on w_r and instead use coordinate descent directly on each of the intervals w_r ≥ 0 and w_r < 0. After finding the minimum of the objective on each interval, we combine the results from both intervals by taking the smaller of the two losses. The details are given in Appendix B.3. Then we select the optimal threshold as described in Section 3.2. Finally, as in other tree building methods such as [5, 10], we perform pruning after a tree is constructed. We start from the leaves and prune nodes based on the upper bound on the training robust loss (8) to ensure that it decreases at every iteration of tree boosting. This cannot be guaranteed with robust splits without pruning, since the tree construction process is greedy and some training examples are also influenced by splits at different branches. Thus, in order to control the upper bound on the robust loss globally over the whole tree as in (8), and not just for the current subtree as in (9), we need a post-hoc approach that takes the structure of the whole tree into account. Therefore, we have to use pruning. We note that in the extreme case, pruning may leave only one decision stump at the root (although this happens extremely rarely in practice), for which we are guaranteed to decrease the upper bound on the robust loss. 
Thus every new tree in the ensemble is guaranteed to reduce the upper bound on the robust loss. Note that this is also true if we use the shrinkage parameter [21], which we discuss in Appendix C. Lastly, we note that the total worst-case complexity is O(n²) in the number of training examples, compared to O(n log n) for XGBoost, which is a relatively low price given that the overall optimization problem is significantly more complicated than the formulation used in XGBoost.

5 Experiments

General setup  We are primarily interested in two quantities: test error (TE) and robust test error (RTE) wrt l∞-perturbations. For boosted stumps, we compute RTE as described in Section 3.1, but we also report the upper bound on RTE (URTE) obtained using the stump-wise bound from Section 4.1 to illustrate that it is actually tight for almost all models. For boosted trees, we report RTE obtained via the MIP formulation of [32], which we adapted to a feasibility problem (see Appendix G.2 for more details), and also the tree-wise upper bounds described in Section 4.1. For evaluation we use 11 datasets: breast-cancer, diabetes, cod-rna, MNIST 1-5 (digit 1 vs digit 5), MNIST 2-6 (digit 2 vs digit 6, following [32] and [9]), FMNIST shoes (sandals vs sneakers), GTS 100-rw (speed 100 vs roadworks), GTS 30-70 (speed 30 vs speed 70), MNIST, FMNIST, and CIFAR-10. More details about the datasets are given in Appendix F. We emphasize that we evaluate our models on image recognition datasets mainly for the sake of comparison to other methods reported in the literature. We consider five types of boosted stumps: normally trained stumps, adversarially trained stumps (see Appendix G.1 for these results), robust stumps of Chen et al. [9], our robust stumps where the robust loss is bounded stump-wise, and our robust stumps where the robust loss is calculated exactly. 
Next we consider four types of boosted trees: normally trained trees, adversarially trained trees, robust trees of Chen et al. [9], and our robust trees where the robust loss is bounded tree-wise. Both for stumps and trees, we perform l∞ adversarial training following [32], i.e. at every iteration we train on clean training points and adversarial examples (in equal proportion). We generate adversarial examples via the cube attack, a simple attack inspired by random search [50] and described in Appendix D (we use 10 iterations and p = 0.5); its performance is shown in Section G.3. We perform model selection for our models and the models of Chen et al. [9] based on a validation set of 20% randomly selected points from the original training set, and we train on the rest of the training set. All models are trained with the exponential loss. More details about the experiments are available in Appendix F and in our repository http://github.com/max-andr/provably-robust-boosting.

Boosted decision stumps  The results for boosted stumps are given in Table 1. First, we observe that normal models are not robust for the considered l∞-perturbations. However, both variants of our robust boosted stumps significantly improve RTE, outperforming the method of Chen et al. [9] on 7 out of the 8 datasets. Note that although our exact method optimizes the exact robust loss, we are still not guaranteed to always outperform Chen et al. [9], since they use a different loss function and the quantities of interest are calculated on test data. The largest improvements compared to normal models are obtained on breast-cancer, from 98.5% RTE to 10.9%, and on MNIST 2-6, from 99.9% to 9.1% RTE. The robust models perform slightly worse in terms of test error, which is in line with the empirical observation made for adversarial training for neural networks [64]. 
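To illustrate the kind of random-search procedure the cube attack belongs to, here is a hedged sketch (our own simplification for a generic margin function; the paper's actual cube attack is specified in its Appendix D and may differ in its proposal scheme):

```python
import random

def cube_attack(margin_fn, x, eps, n_iters=10, p=0.5, seed=0):
    """Simple random-search attack in the l_inf-ball intersected with [0, 1]^d.
    margin_fn(z) should return y * F(z); we try to drive it below zero.
    This is an illustrative sketch, not the paper's exact algorithm."""
    rng = random.Random(seed)
    best = list(x)
    best_margin = margin_fn(best)
    for _ in range(n_iters):
        cand = list(best)
        for j in range(len(cand)):
            if rng.random() < p:   # perturb each coordinate with probability p
                cand[j] = min(1.0, max(0.0, x[j] + rng.choice([-eps, eps])))
        m = margin_fn(cand)
        if m < best_margin:        # keep the proposal only if the margin drops
            best, best_margin = cand, m
    return best, best_margin
```
Since every proposal is clipped to the ε-ball around x and to [0, 1]^d and is kept only if the margin decreases, the attack returns a feasible perturbation whose margin is never worse than that of the clean point, yielding a lower bound on the robust test error.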
In addition to the robust test error (RTE), we also report the upper bound (URTE) to show that it is very close to RTE. Notably, for our robust stumps trained with the upper bound on the robust loss, URTE is equal to RTE for all models, and it is very close to the RTE of our robust stumps trained with the exact robust loss, while taking about 4x less time to train on average. Thus bounding the sum over weak learners element-wise, as done in (6), seems to be tight enough to yield robust models. Finally, we provide in Appendix G.2 a more detailed comparison to the robust boosted stumps of Chen et al. [9].

Table 1: Evaluation of robustness for boosted stumps. We show, in percentage, test error (TE), exact robust test error (RTE), and upper bound on robust test error (URTE). Both variants of our robust stumps outperform the method of Chen et al. [9]. We also observe that URTE is very close to RTE or even the same for many models.

Dataset        l∞ ε    | Normal stumps       | Robust stumps   | Our robust stumps   | Our robust stumps
                       | (standard training) | Chen et al. [9] | (robust loss bound) | (exact robust loss)
                       | TE    RTE   URTE    | TE    RTE       | TE    RTE   URTE    | TE    RTE   URTE
breast-cancer  0.3     | 2.9   98.5  100     | 8.8   16.8      | 4.4   10.9  10.9    | 5.1   10.9  10.9
diabetes       0.05    | 24.7  54.5  56.5    | 23.4  30.5      | 28.6  33.1  33.1    | 27.3  31.8  31.8
cod-rna        0.025   | 4.7   42.8  44.9    | 11.6  23.2      | 11.2  22.4  22.4    | 11.2  22.6  22.6
MNIST 1-5      0.3     | 0.5   85.4  85.4    | 0.9   5.2       | 0.6   3.7   3.7     | 0.7   3.6   3.7
MNIST 2-6      0.3     | 1.7   99.9  99.9    | 2.8   13.9      | 3.0   9.1   9.1     | 3.0   9.2   9.2
FMNIST shoes   0.1     | 2.4   100   100     | 7.1   22.2      | 6.2   11.8  11.8    | 5.7   10.8  11.5
GTS 100-rw     8/255   | 1.1   9.9   9.9     | 2.0   11.8      | 2.8   8.9   8.9     | 2.0   6.7   6.7
GTS 30-70      8/255   | 11.3  53.7  53.7    | 12.7  28.2      | 12.7  26.9  26.9    | 12.9  27.6  27.6

Table 2: Evaluation of robustness for boosted trees. We report, in percentages, test error (TE), robust test error (RTE), and upper bound on robust test error (URTE). 
Our robust boosted trees lead to better RTE compared to adversarial training and the robust trees of Chen et al. [9]. We observe that especially for our models the URTE is very close to the RTE, while the URTE is orders of magnitude faster to compute.

Dataset        l∞ ε    | Normal trees        | Adv. trained trees | Robust trees    | Our robust trees
                       | (standard training) | (with cube attack) | Chen et al. [9] | (robust loss bound)
                       | TE    RTE   URTE    | TE    RTE   URTE   | TE    RTE       | TE    RTE   URTE
breast-cancer  0.3     | 0.7   81.0  81.8    | 0.0   27.0  27.0   | 0.7   13.1      | 0.7   6.6   6.6
diabetes       0.05    | 22.7  55.2  61.7    | 26.6  46.8  46.8   | 22.1  40.3      | 27.3  35.7  35.7
cod-rna        0.025   | 3.4   37.6  47.1    | 10.9  24.8  24.8   | 10.2  24.2      | 6.9   21.3  21.4
MNIST 1-5      0.3     | 0.1   90.7  96.0    | 1.3   9.0   9.5    | 0.3   2.9       | 0.2   1.3   1.4
MNIST 2-6      0.3     | 0.4   89.6  100     | 2.3   15.1  15.9   | 0.5   6.9       | 0.7   3.8   4.1
FMNIST shoes   0.1     | 1.7   99.8  99.9    | 5.5   14.1  14.2   | 3.1   13.2      | 3.6   8.0   8.1
GTS 100-rw     8/255   | 0.9   6.0   6.1     | 1.0   8.4   8.4    | 1.5   9.7       | 2.6   4.7   4.7
GTS 30-70      8/255   | 14.2  31.4  32.6    | 16.2  26.7  26.8   | 11.5  28.8      | 13.8  20.9  21.4

Boosted decision trees  The results for boosted trees of depth 4 are given in Table 2. Our robust training of boosted trees outperforms both adversarial training and the method of Chen et al. [9] in terms of RTE on all 8 datasets. For example, on breast-cancer, the RTE of the robust trees of Chen et al. [9] is 13.1%, while the RTE of our robust model is 6.6%, and we achieve the same test error of 0.7%. We note that the TE and RTE of our robust trees are in many cases better than those of our robust stumps. This suggests that there is a benefit to using more expressive weak learners in boosting to get more robust and accurate models. Adversarial training performs worse than our provable defense not only in terms of URTE, but even in terms of LRTE (the lower bound on the robust test error given by an attack). 
This is different from the neural network literature [45, 25], where adversarial training usually provides better LRTE and significantly better test error than methods providing provable robustness guarantees. However, our upper bound on the robust loss is tight and tractable, and thus adversarial training should not be used: it provides only a lower bound, and minimizing an upper bound makes more sense than minimizing a lower bound. We provide a more detailed comparison to Chen et al. [9] in Appendix G.2, including multi-class datasets (MNIST, FMNIST). We also show there that our proposed method to calculate the certified robust error (URTE) is orders of magnitude faster than the MIP formulation.

Comparison to provable defenses for neural networks  We note that our methods are primarily suitable for tabular data, but in the literature on robustness of neural networks there are no established tabular datasets to compare to. Thus, we compare our robust boosted trees to the convolutional networks of [73, 16, 75, 25, 13] on MNIST, FMNIST, and CIFAR-10. We do not compare to randomized smoothing, since it is competitive only for small l∞-balls [57]. Since the considered datasets are multi-class, we extend our training of robust boosted trees from the binary classification case to multi-class using the one-vs-all approach described in Appendix E. We also use data augmentation by shifting the images by one pixel horizontally and vertically. We fit our robust trees with depth up to 30 for MNIST and FMNIST, and with depth up to 4 for CIFAR-10. Note that we restrict the minimum number of examples in a leaf to 100; thus a tree of depth 30 makes only a small fraction of the possible 2^30 splits. We provide a comparison in Table 3. In terms of provable robustness (URTE), our method is competitive with many provable defenses for CNNs. 
In particular, we outperform the LP-relaxation approach of [73] on all three datasets, both in terms of test error and upper bounds. We also systematically outperform the recent approach of [75], which aims at enhancing verifiability of CNNs: we obtain a better URTE with the same or better test error. Only the recent work of [25] is able to outperform our approach. Also, the CIFAR-10 model of [16] shows a better URTE than our approach, but a worse test error. We would like to emphasize that even on CIFAR-10 (with a relatively large ε = 8/255) our models are not too far away from the state of the art. In addition, our robust boosted tree models require less computation at inference time.

Robustness vs accuracy tradeoff There is a lot of empirical evidence that robust training methods for neural networks exhibit a trade-off between robustness and accuracy [73, 25, 64]. We can confirm that this trade-off can also be observed for boosted trees: we consistently lose accuracy once we increase ε. The only slight gain in accuracy that we observe is on the FMNIST shoes dataset. More details and plots of robustness versus accuracy can be found in Appendix G.4.

Table 3: Comparison of our robust boosted trees to the state-of-the-art provable defenses for convolutional neural networks reported in the literature. Our models are competitive to them in terms of upper bounds on robust test error (URTE). By * we denote results taken from [25], where they could achieve significantly better TE and URTE with the code of [73].

Dataset    l∞ ε    Approach                      TE       LRTE     URTE
MNIST      0.3     Wong et al. [73]*             13.52%   26.16%   26.92%
                   Xiao et al. [75]              2.67%    7.95%    19.32%
                   Our robust trees, depth 30    2.68%    12.46%   12.46%
                   Gowal et al. [25]             1.66%    6.12%    8.05%
FMNIST     0.1     Wong and Kolter [72]          21.73%   31.63%   34.53%
                   Croce et al. [13]             14.50%   26.60%   30.70%
                   Our robust trees, depth 30    14.15%   23.17%   23.17%
CIFAR-10   8/255   Xiao et al. [75]              59.55%   73.22%   79.73%
                   Wong et al. [73]              71.33%   –        78.22%
                   Our robust trees, depth 4     58.46%   74.69%   74.69%
                   Dvijotham et al. [16]         59.38%   67.68%   70.79%
                   Gowal et al. [25]             50.51%   65.23%   67.96%

Figure 3: The distribution of the splitting thresholds for boosted trees models trained on MNIST 2-6 (panels: normal trees, adversarially trained trees, our robust trees). We can observe that our robust model almost always selects splits in the range between 0.3 and 0.7, which is reasonable given l∞-perturbations within ε = 0.3. At the same time, the normal and adversarially trained models split close to 0 or 1, which suggests that their decisions might be easily flipped by the adversary.

Interpretability For boosted stumps or trees, unlike for neural networks, we can directly inspect the model and the classification rules it has learned. In particular, in Figure 3, we plot the distribution of the splitting thresholds b for the three boosted trees models on MNIST 2-6 reported in Table 2. We can observe that our robust model almost always selects splits in the range between 0.3 and 0.7, which is reasonable given that more than 80% of the pixels of MNIST are either 0 or 1, and the considered l∞-perturbations are within ε = 0.3. At the same time, the normal and adversarially trained models split arbitrarily close to 0 or 1, which suggests that their decisions might be easily flipped if the adversary is allowed to change them within this ε. To emphasize the importance of interpretability and transparent decision making, we provide feature importance plots and more histograms of the splitting thresholds in Appendix G.5 and G.6.

6 Conclusions and Outlook

Our results show that the proposed methods achieve state-of-the-art provable robustness among boosted stumps and trees, and are also competitive to provably robust CNNs.
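Split-threshold histograms like those in Figure 3 require only reading off the thresholds b stored at each internal node of the trees. A minimal scikit-learn sketch (our own illustration, not the paper's implementation; `split_thresholds` is a name we introduce):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

def split_thresholds(ensemble):
    """Collect the thresholds b of all internal nodes across all trees
    of a fitted gradient-boosting ensemble."""
    thresholds = []
    for tree in ensemble.estimators_.ravel():
        t = tree.tree_
        internal = t.children_left != -1  # scikit-learn marks leaves with -1
        thresholds.extend(t.threshold[internal])
    return np.asarray(thresholds)

# Fit a small ensemble on synthetic data and histogram its thresholds.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
clf = GradientBoostingClassifier(n_estimators=30, max_depth=3,
                                 random_state=0).fit(X, y)
b = split_thresholds(clf)
hist, edges = np.histogram(b, bins=20)
```

For the models in the paper one would instead load the boosted trees from the linked repository and histogram the thresholds over the pixel range [0, 1].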
This can be seen as a strong indicator that, particularly for large l∞-balls, current provably robust CNNs are so over-regularized that their performance is comparable to simple decision tree ensembles that make decisions based on individual pixel values. Thus it remains an open research question whether it is possible to establish tight and tractable upper bounds on the robust loss for neural networks. In contrast, as shown in this paper, for boosted decision trees there exist simple and tight upper bounds which can be efficiently optimized. Moreover, for boosted decision stumps one can compute and optimize the exact robust loss. We thus think that if provable robustness is the goal, then our robust decision stumps and trees are a promising alternative, as they not only come with tight robustness guarantees but are also much easier to interpret.

Acknowledgements

We thank the anonymous reviewers for very helpful and thoughtful comments. We acknowledge the support from the German Federal Ministry of Education and Research (BMBF) through the Tübingen AI Center (FKZ: 01IS18039A). This work was also supported by the DFG Cluster of Excellence "Machine Learning – New Perspectives for Science", EXC 2064/1, project number 390727645, and by DFG grant 389792660 as part of TRR 248.

References

[1] Anish Athalye, Nicholas Carlini, and David A. Wagner. Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. ICML, 2018.
[2] Dimitris Bertsimas and Jack Dunn. Optimal classification trees. Machine Learning, 2017.
[3] Dimitris Bertsimas, Jack Dunn, Colin Pawlowski, and Ying Daisy Zhuo. Robust classification. INFORMS Journal on Optimization, 1:2–34, 2018.
[4] Stephen Boyd and Lieven Vandenberghe. Convex optimization. Cambridge University Press, 2004.
[5] Leo Breiman, Jerome Friedman, Richard Olshen, and Charles Stone.
Classification and regression trees. Chapman & Hall/CRC, 1984.
[6] Wieland Brendel, Jonas Rauber, and Matthias Bethge. Decision-based adversarial attacks: reliable attacks against black-box machine learning models. ICLR, 2018.
[7] Jacob Buckman, Aurko Roy, Colin Raffel, and Ian Goodfellow. Thermometer encoding: one hot way to resist adversarial examples. ICLR, 2018.
[8] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. IEEE Symposium on Security and Privacy, 2017.
[9] Hongge Chen, Huan Zhang, Duane Boning, and Cho-Jui Hsieh. Robust decision trees against adversarial examples. ICML, 2019.
[10] Tianqi Chen and Carlos Guestrin. XGBoost: a scalable tree boosting system. KDD, 2016.
[11] Minhao Cheng, Thong Le, Pin-Yu Chen, Jinfeng Yi, Huan Zhang, and Cho-Jui Hsieh. Query-efficient hard-label black-box attack: an optimization-based approach. ICLR, 2019.
[12] Jeremy M Cohen, Elan Rosenfeld, and J Zico Kolter. Certified adversarial robustness via randomized smoothing. ICML, 2019.
[13] Francesco Croce, Maksym Andriushchenko, and Matthias Hein. Provable robustness of ReLU networks via maximization of linear regions. AISTATS, 2019.
[14] Stefan Droste, Thomas Jansen, and Ingo Wegener. On the analysis of the (1+1) evolutionary algorithm. Theoretical Computer Science, 2002.
[15] Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml.
[16] Krishnamurthy Dvijotham, Sven Gowal, Robert Stanforth, Relja Arandjelovic, Brendan O'Donoghue, Jonathan Uesato, and Pushmeet Kohli. Training verified learners with learned verifiers. arXiv preprint arXiv:1805.10265, 2018.
[17] Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. HotFlip: white-box adversarial examples for text classification. ACL, 2018.
[18] Logan Engstrom, Andrew Ilyas, and Anish Athalye.
Evaluating and understanding the robustness of adversarial logit pairing. NeurIPS 2018 Workshop on Security in Machine Learning, 2018.
[19] Yoav Freund. A more robust boosting algorithm. arXiv preprint arXiv:0905.2138v1, 2009.
[20] Yoav Freund and Robert E Schapire. Experiments with a new boosting algorithm. ICML, 1996.
[21] Jerome Friedman. Greedy function approximation: a gradient boosting machine. The Annals of Statistics, 29:1189–1232, 2001.
[22] Jerome Friedman. Stochastic gradient boosting. Computational Statistics & Data Analysis, 38:367–378, 2002.
[23] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, 28:337–407, 2000.
[24] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. ICLR, 2015.
[25] Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Timothy Mann, and Pushmeet Kohli. On the effectiveness of interval bound propagation for training verifiably robust models. NeurIPS Workshop on Security in Machine Learning, 2018.
[26] Chuan Guo, Jacob R Gardner, Yurong You, Andrew Gordon Wilson, and Kilian Q Weinberger. Simple black-box adversarial attacks. ICML, 2019.
[27] Gurobi Optimization, LLC. Gurobi optimizer reference manual, 2019. URL http://www.gurobi.com.
[28] Matthias Hein and Maksym Andriushchenko. Formal guarantees on the robustness of a classifier against adversarial manipulation. NeurIPS, 2017.
[29] Matthias Hein, Maksym Andriushchenko, and Julian Bitterwolf. Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem. CVPR, 2019.
[30] Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. Black-box adversarial attacks with limited queries and information.
ICML, 2018.
[31] Harini Kannan, Alexey Kurakin, and Ian Goodfellow. Adversarial logit pairing. arXiv preprint arXiv:1803.06373, 2018.
[32] Alex Kantchelian, JD Tygar, and Anthony Joseph. Evasion and hardening of tree ensemble classifiers. ICML, 2016.
[33] Guy Katz, Clark Barrett, David L Dill, Kyle Julian, and Mykel J Kochenderfer. Reluplex: an efficient SMT solver for verifying deep neural networks. CAV, 2017.
[34] Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. LightGBM: a highly efficient gradient boosting decision tree. NeurIPS, 2017.
[35] Alex Krizhevsky. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.
[36] Bogdan Kulynych, Jamie Hayes, Nikita Samarin, and Carmela Troncoso. Evading classifiers in discrete domains with provable optimality guarantees. NeurIPS Workshop on Security in Machine Learning, 2018.
[37] Maksim Lapin, Matthias Hein, and Bernt Schiele. Loss functions for top-k error: analysis and insights. CVPR, 2016.
[38] Maksim Lapin, Matthias Hein, and Bernt Schiele. Analysis and optimization of loss functions for multiclass, top-k and multilabel classification. PAMI, 40:1533–1554, 2016.
[39] Laurent Hyafil and Ronald L Rivest. Constructing optimal binary decision trees is NP-complete. Information Processing Letters, 1976.
[40] Yann LeCun. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/, 1998.
[41] Mathias Lecuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel Hsu, and Suman Jana. Certified robustness to adversarial examples with differential privacy. IEEE Symposium on Security and Privacy, 2019.
[42] Bai Li, Changyou Chen, Wenlin Wang, and Lawrence Carin Duke. Certified adversarial robustness with additive Gaussian noise. NeurIPS, 2019.
[43] Jiajun Lu, Hussein Sibai, Evan Fabry, and David Forsyth.
No need to worry about adversarial examples in object detection in autonomous vehicles. arXiv preprint arXiv:1707.03501, 2017.
[44] Roman Werner Lutz, Markus Kalisch, and Peter Bühlmann. Robustified L2 boosting. Computational Statistics & Data Analysis, 52:3331–3341, 2008.
[45] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards deep learning models resistant to adversarial attacks. ICLR, 2018.
[46] Seungyong Moon, Gaon An, and Hyun Oh Song. Parsimonious black-box adversarial attacks via efficient combinatorial optimization. ICML, 2019.
[47] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. DeepFool: a simple and accurate method to fool deep neural networks. CVPR, 2016.
[48] Marius Mosbach, Maksym Andriushchenko, Thomas Trost, Matthias Hein, and Dietrich Klakow. Logit pairing methods can fool gradient-based attacks. NeurIPS 2018 Workshop on Security in Machine Learning, 2018.
[49] Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Do deep generative models know what they don't know? ICLR, 2019.
[50] Yurii Nesterov and Vladimir Spokoiny. Random gradient-free minimization of convex functions. Foundations of Computational Mathematics, 17:527–566, 2017.
[51] A. Nguyen, J. Yosinski, and J. Clune. Deep neural networks are easily fooled: high confidence predictions for unrecognizable images. CVPR, 2015.
[52] Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1809.03008, 2016.
[53] Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. The limitations of deep learning in adversarial settings. IEEE EuroS&P, 2016.
[54] Aditi Raghunathan, Jacob Steinhardt, and Percy Liang.
Certified defenses against adversarial examples. ICLR, 2018.
[55] Ryan Rifkin and Aldebaro Klautau. In defense of one-vs-all classification. Journal of Machine Learning Research, 5:101–141, 2004.
[56] Paolo Russu, Ambra Demontis, Battista Biggio, Giorgio Fumera, and Fabio Roli. Secure kernel machines against evasion attacks. ACM Workshop on AI and Security, 2016.
[57] Hadi Salman, Greg Yang, Jerry Li, Pengchuan Zhang, Huan Zhang, Ilya Razenshteyn, and Sebastien Bubeck. Provably robust deep learning via adversarially trained smoothed classifiers. NeurIPS, 2019.
[58] Robert E Schapire and Yoram Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 1999.
[59] Uri Shaham, Yutaro Yamada, and Sahand Negahban. Understanding adversarial training: increasing local stability of supervised models through robust optimization. Neurocomputing, 2018.
[60] Jack W Smith, JE Everhart, WC Dickson, WC Knowler, and RS Johannes. Using the ADAP learning algorithm to forecast the onset of diabetes mellitus. Annual Symposium on Computer Application in Medical Care, 1988.
[61] Johannes Stallkamp, Marc Schlipsing, Jan Salmen, and Christian Igel. Man vs. computer: benchmarking machine learning algorithms for traffic sign recognition. Neural Networks, 2012.
[62] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. ICLR, 2014.
[63] Vincent Tjeng, Kai Xiao, and Russ Tedrake. Evaluating robustness of neural networks with mixed integer programming. ICLR, 2019.
[64] Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, and Aleksander Madry. Robustness may be at odds with accuracy. ICLR, 2019.
[65] Andrew V Uzilov, Joshua M Keegan, and David H Mathews.
Detection of non-coding RNAs on the basis of predicted secondary structure formation free energy change. BMC Bioinformatics, 7:1–30, 2006.
[66] Paul Viola and Michael Jones. Rapid object detection using a boosted cascade of simple features. CVPR, 2001.
[67] Paul Viola and Michael J Jones. Robust real-time face detection. IJCV, 57:137–154, 2004.
[68] Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. Efficient formal safety analysis of neural networks. NeurIPS, 2018.
[69] Yizhen Wang, Somesh Jha, and Kamalika Chaudhuri. Analyzing the robustness of nearest neighbors to adversarial examples. ICML, 2018.
[70] Manfred K. Warmuth, Karen Glocer, and Gunnar Rätsch. Boosting algorithms for maximizing the soft margin. NeurIPS, 2007.
[71] Tsui-Wei Weng, Huan Zhang, Hongge Chen, Zhao Song, Cho-Jui Hsieh, Duane Boning, Inderjit S. Dhillon, and Luca Daniel. Towards fast computation of certified robustness for ReLU networks. ICML, 2018.
[72] Eric Wong and Zico Kolter. Provable defenses against adversarial examples via the convex outer adversarial polytope. ICML, 2018.
[73] Eric Wong, Frank Schmidt, Jan Hendrik Metzen, and J Zico Kolter. Scaling provable adversarial defenses. NeurIPS, 2018.
[74] Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. arXiv preprint arXiv:1708.07747, 2017.
[75] Kai Y Xiao, Vincent Tjeng, Nur Muhammad Shafiullah, and Aleksander Madry. Training for faster adversarial robustness verification via inducing ReLU stability. ICLR, 2019.
[76] H. Xu, C. Caramanis, and S. Mannor. Robustness and regularization of support vector machines. Journal of Machine Learning Research, 10:1485–1510, 2009.
[77] Huan Zhang, Tsui-Wei Weng, Pin-Yu Chen, Cho-Jui Hsieh, and Luca Daniel. Efficient neural network robustness certification with general activation functions.
NeurIPS, 2018.