NeurIPS 2020

Model Rubik’s Cube: Twisting Resolution, Depth and Width for TinyNets


Meta Review

The paper received mixed ratings: two reviewers recommend acceptance, and two consider the paper marginally below the acceptance threshold. All reviewers agree that the paper provides useful insights, e.g., the observation that resolution and depth are more important than width for tiny networks. The main concerns raised by the reviewers were (i) the novelty is not highly significant / the method is too heuristic, and (ii) issues with the experiments and a lack of analysis on other tasks, such as object detection. The rebuttal helped clarify several other questions raised by the reviewers and included new experiments on COCO object detection using Faster-RCNN.

All reviewers actively participated in the discussion phase. R3 remained concerned about the search efficiency of the method relative to alternatives such as FBNetv2, and pointed out issues with the reported results for EfficientNet-B0 (Table 3). R4 remained concerned about how well the approach generalizes to detection when faster detectors such as SSD or YOLO are used.

While the concerns raised by R3 and R4 are legitimate, the AC (after a discussion with the SAC and another AC) agrees with R1 and R2 that the paper passes the acceptance bar of NeurIPS. The inverse formula for scaling down EfficientNet has value to the community, and the results are strong, especially in the low-FLOPs regime (despite the fact that FLOPs may not be a good proxy for actual speed). It would be desirable to see the performance of the method in tandem with efficient detectors such as YOLO or SSD, but the AC considers the experimental analysis in the paper sufficient. The authors should clarify the reported EfficientNet-B0 results in the final version, but this is not a major issue given the results in the low-FLOPs regime. Please also make sure to add the discussion from the rebuttal to the camera-ready version.