This paper was reviewed by four knowledgeable experts, none of whom holds a particularly strong opinion in either direction about the work. R1's main concerns regard the uncertainty learning baseline (which, in my opinion, the rebuttal addresses adequately, especially given the limited time) and the extension to other tasks. I agree with the authors that extending this approach to new tasks is more appropriate for separate work, given the novelty of the approach and its application to crowd counting. R2's concerns mainly regard clarity of writing, which after close study I do not find especially distracting, although there is room for improvement. I find this paper to be a novel approach to a problem that is usually ignored entirely in the crowd counting literature, and thus a refreshing break from the current trend in the area -- as well as a fairly detailed study of modeling annotation noise in crowd counting data (even if not *all* types of noise). As such, I think NeurIPS is a fitting venue for just such a paper, and my final decision is to accept.