NeurIPS 2020

MPNet: Masked and Permuted Pre-training for Language Understanding


Meta Review

This paper proposes a new approach to self-supervised pre-training on text, building closely on prior work in BERT and XLNet. The paper demonstrates that the approach yields models that are noticeably better than comparable models from prior work, and isolates the reason for this improvement in a reasonably thorough ablation. While one reviewer raises concerns about the quality of the comparisons, I'm convinced that they already largely meet the standards of the field, and will be fully satisfactory after the promised revisions and additions. While this work is somewhat incremental, the method is effective and relevant to a highly prominent open problem in ML for text.