NeurIPS 2020

Residual Distillation: Towards Portable Deep Neural Networks without Shortcuts

Meta Review

The paper proposes a new training scheme following the teacher-student paradigm to obtain results comparable to those of a ResNet model, but without residual connections (shortcuts). Results are on par with the state of the art and the approach is very interesting, although not necessarily very novel in principle (I encourage the authors to make this much clearer in the final text). All reviewers agree that this is a good contribution and that the rebuttal was helpful in reaching the final conclusion.