Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
Conference Event Type: Poster
The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks. It maintains an exponential moving average of label predictions on each training example, and penalizes predictions that are inconsistent with this target. However, because the targets change only once per epoch, Temporal Ensembling becomes unwieldy when training on large datasets. To overcome this problem, we propose Mean Teacher, a method that averages model weights instead of label predictions. As an additional benefit, Mean Teacher improves test accuracy and enables training with fewer labels than Temporal Ensembling. Using the same network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels. We show that Mean Teacher is compatible with residual networks, and improve the state of the art on CIFAR-10 with 4000 labels from 10.55% to 6.28%. Our preliminary experiments also suggest a large improvement over the state of the art on semi-supervised ImageNet 2012.
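The central mechanism, averaging model weights rather than label predictions, can be illustrated with a minimal sketch. The function names, the smoothing coefficient, and the toy values below are illustrative, not the paper's exact implementation; the teacher's weights track the student's via an exponential moving average, and a consistency penalty discourages the student from disagreeing with the teacher's predictions.

```python
def ema_update(teacher, student, alpha=0.99):
    """Exponential moving average of weights: t <- alpha * t + (1 - alpha) * s.

    Unlike Temporal Ensembling's per-epoch prediction averaging, this
    update can be applied after every training step.
    """
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher, student)]


def consistency_loss(student_preds, teacher_preds):
    """Mean squared difference between student and teacher predictions,
    penalizing predictions inconsistent with the teacher targets."""
    n = len(student_preds)
    return sum((s - t) ** 2 for s, t in zip(student_preds, teacher_preds)) / n


# Toy example: after each step the teacher weights drift toward the student's.
teacher = [0.0, 0.0]
student = [1.0, 2.0]
for _ in range(3):
    teacher = ema_update(teacher, student, alpha=0.5)
print(teacher)  # with alpha=0.5, three steps give [0.875, 1.75]
```

Because the teacher is updated from weights at every step rather than from predictions once per epoch, the targets improve continuously, which is what allows the method to scale to large datasets.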