All four reviewers support acceptance of the contribution. I believe the contribution is original and intriguing enough to merit a spotlight. This summary from R4 shows how the work in this paper opens new possibilities in NLP, complementing powerful adaptable models such as GPT-3:

“This paper shows that it is possible to adapt pretrained language models (LMs) on the fly based on natural language text in order to correct the model's behavior. When an LM would answer a question incorrectly, the authors supplement the model with a hint or relevant piece of evidence in the form of natural language text and find that the model is then able to produce the correct answer. These results are a proof of concept that large, black-box LMs can be adapted/corrected in a natural way, potentially by non-expert users of the system, simply by providing relevant natural language text.”

The following issues were raised by several reviewers, though only R1 initially cited them as a cause for rejection:

- Distractor examples: the initial write-up is quite confusing, especially as they are not our usual counter-examples, and most reviewers, myself included, did not understand their role initially. The authors have clarified their explanation and offered to update the final version.
- Synthetic data: after discussion, most reviewers agree that it is acceptable.