The paper proposes a novel loss, accuracy versus uncertainty calibration (AvUC), for improving uncertainty estimation in deep learning. The initial scores were borderline and the reviewers raised a few concerns. I thank the authors for writing a thoughtful rebuttal with additional experiments, which the reviewers appreciated. During the discussion, all reviewers agreed that the rebuttal addresses the major concerns, and several of them increased their scores. I have also read the paper carefully, and I recommend acceptance for the following reasons:

Pros:
- Simplicity of the approach
- Well-written paper
- Extensive experimental results, including calibration under dataset shift and OOD detection, showing the benefits of the proposed approach
- Accompanying code, which should make it easier for folks in the community to build on this work

Suggestions for the camera-ready:
- Theoretical justification: the points in the rebuttal should be moved to the main text (especially the connections to loss-calibrated inference and the justification as a proper loss).
- Experimental results: some of the additional ablations requested were added during the rebuttal; please include them in the main text, as this strengthens the paper.
- There are a couple of other minor comments raised by the reviewers that weren't completely addressed in the rebuttal. These are relatively minor, and I encourage the authors to address them as well in the camera-ready.
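For context, the core idea behind an accuracy-versus-uncertainty loss can be sketched as follows. This is a minimal illustration of the general AvU construction (soft counts of accurate/inaccurate × certain/uncertain predictions), not the authors' exact implementation; the uncertainty measure (normalized predictive entropy), the threshold `u_th`, and the weighting scheme here are my assumptions.

```python
import math

import torch
import torch.nn.functional as F


def avuc_loss(logits, labels, u_th=0.5, eps=1e-12):
    """Sketch of an AvU-style loss: penalize accurate-but-uncertain and
    inaccurate-but-certain predictions via differentiable soft counts."""
    probs = F.softmax(logits, dim=-1)
    conf, preds = probs.max(dim=-1)
    # Predictive entropy, normalized to [0, 1], as the uncertainty measure.
    entropy = -(probs * probs.clamp_min(eps).log()).sum(dim=-1)
    unc = entropy / math.log(probs.shape[-1])
    accurate = (preds == labels).float()
    certain = (unc < u_th).float()
    # Soft counts: hard masks select the quadrant, while the confidence and
    # uncertainty factors keep the loss differentiable w.r.t. the logits.
    n_ac = (accurate * certain * conf * (1 - unc)).sum()
    n_au = (accurate * (1 - certain) * conf * unc).sum()
    n_ic = ((1 - accurate) * certain * (1 - conf) * (1 - unc)).sum()
    n_iu = ((1 - accurate) * (1 - certain) * (1 - conf) * unc).sum()
    # Loss is zero when all mass lies in the desirable quadrants
    # (accurate-certain and inaccurate-uncertain), and grows otherwise.
    return torch.log1p((n_au + n_ic) / (n_ac + n_iu + eps))
```

In practice such a term is added to a standard loss (e.g. cross-entropy or the ELBO for a Bayesian network) with a trade-off coefficient.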