{"title": "Bayesian Adversarial Learning", "book": "Advances in Neural Information Processing Systems", "page_first": 6892, "page_last": 6901, "abstract": "Deep neural networks are known to be vulnerable to adversarial attacks, raising serious security concerns for practical deployment. Popular defensive approaches can be formulated as a (distributionally) robust optimization problem that minimizes a ``point estimate'' of the worst-case loss, derived from either per-datum perturbations or an adversarial data-generating distribution within certain pre-defined constraints. This point estimate ignores potential test adversaries beyond the pre-defined constraints, so model robustness may deteriorate sharply against stronger test-time adversarial data. In this work, a novel robust training framework, Bayesian Adversarial Learning (BAL), is proposed to alleviate this issue: a distribution is placed over the adversarial data-generating distribution to account for the uncertainty of the adversarial data-generating process. This uncertainty directly helps to account for potential adversaries that are stronger than the point estimate of distributionally robust optimization. The uncertainty over model parameters is also incorporated to accommodate a full Bayesian framework. We design a scalable Markov Chain Monte Carlo sampling strategy to obtain the posterior distribution over model parameters. Various experiments are conducted to verify the superiority of BAL over existing adversarial training methods.
The code for BAL is available at \\url{https://tinyurl.com/ycxsaewr}.", "full_text": "", "award": [], "sourceid": 3442, "authors": [{"given_name": "Nanyang", "family_name": "Ye", "institution": "University of Cambridge"}, {"given_name": "Zhanxing", "family_name": "Zhu", "institution": "Peking University"}]}