This paper focuses on the intersection of neural architecture search (NAS) and domain adaptation. The proposal is to minimize the cross-domain generalization gap that commonly arises when current NAS methods rely on proxy tasks. The philosophy behind it is quite appealing to me: instead of searching directly on the target dataset, which incurs a high computational cost, the authors improve the generalizability of neural architectures by leveraging a small portion of target samples via a domain adaptation technique. This philosophy leads to a novel algorithm design, AdaptNAS. The clarity and novelty are clearly above the bar for NeurIPS. While the reviewers had some concerns about the significance, the authors did a particularly good job in their rebuttal. Thus, we have all agreed to accept this paper for publication. Please carefully address R4's comments in the final version.