The focus of the submission is backdoor attacks in federated learning. The authors 1) show that models vulnerable to adversarial examples are also vulnerable to backdoor attacks, 2) prove that detecting backdoors can be computationally hard, and 3) propose a new class of backdoor attacks called edge-case backdoors. The theoretical contributions are accompanied by an extensive evaluation of the new backdoor attack on challenging datasets. The paper is technically sound; it addresses a timely topic in machine learning and delivers both important theoretical insights and new algorithmic tools. It should be of interest to the NeurIPS audience.