Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
The reviewers appreciated the good empirical results and theoretical analysis. This paper proposes to treat neural architecture search (NAS) as a selection problem among experts: over time, it eliminates underperforming experts with a wipe-out step. As two of the reviewers pointed out, the theoretical analysis is interesting (and rare in this type of paper), but it would be good to spell out more explicitly when and why the assumptions hold.

Empirical performance seems good, but the authors should include error bars for at least the CIFAR-10 experiments, and ideally the ImageNet ones as well. The architecture search also appears to be very fast compared to other methods (e.g., DARTS), but again, it would be good to state clearly whether the search was run over multiple seeds and whether the cost per seed is included in the total cost of the search (as done in the DARTS paper).

Overall, this is a good paper. Two reviewers argued for acceptance, while the last reviewer gave a relatively low score. The rebuttal did not sway this reviewer towards changing their score, but they recognized that the proposed algorithm is interesting and has practical value. As a result, they chose not to argue directly against acceptance and stated they would not be upset if the paper were accepted.
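For readers unfamiliar with the elimination scheme the summary alludes to, the following is a minimal sketch of a generic successive-elimination ("wipe-out") loop over a pool of experts. All names, the noise model, and the elimination rule are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of a wipe-out selection loop: keep a pool of experts
# (e.g., candidate architectures), score them each round, and eliminate the
# worst-performing fraction until one survivor remains. Not the paper's
# actual method; all parameters here are assumptions.
import random

def wipe_out_search(num_experts=16, rounds=4, drop_frac=0.5, seed=0):
    rng = random.Random(seed)
    # Each expert has a hidden "true quality"; evaluations are noisy.
    true_quality = {i: rng.random() for i in range(num_experts)}
    pool = list(true_quality)
    for _ in range(rounds):
        if len(pool) == 1:
            break
        # Noisy evaluation of each surviving expert.
        scores = {i: true_quality[i] + rng.gauss(0, 0.05) for i in pool}
        # Wipe-out step: drop the lowest-scoring fraction of the pool.
        pool.sort(key=lambda i: scores[i], reverse=True)
        keep = max(1, int(len(pool) * (1 - drop_frac)))
        pool = pool[:keep]
    return pool[0]  # index of the best surviving expert

best = wipe_out_search()
```

With a fixed seed the run is deterministic, which also illustrates the meta-review's point about reporting results over multiple seeds: in practice one would repeat the search with several seeds and report variability.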