NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Reviewer 1
The approach presented in this paper is interesting, and the portfolio selection approach seems to fit naturally into selective classification. The paper is well motivated, well written, clear, and self-contained. I chose this score (marginally below the accept threshold) for the following two reasons:
1. The approach you suggest (adding a new class) is very similar to the SelectiveNet approach, and as we can observe in the experimental section, your approach yields only a marginal improvement.
2. Your approach is indeed simpler than SelectiveNet. However, the table on page 2 is a bit misleading: the gain of SelectiveNet, as explained in their paper, comes from the fact that the network must be retrained for each target coverage.
Summarizing the above, the contribution over the state of the art (SelectiveNet) is marginal.
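[For context, a minimal sketch of what "adding a new class" typically means in selective classification, not the paper's actual code; the architecture, sizes, and names here are assumed for illustration: a K-way classifier is given a (K+1)-th "abstain" logit, and inputs routed to that logit are rejected.]

import torch
import torch.nn as nn

K = 10  # number of real classes (assumed for illustration)

# Toy classifier with one extra output unit indexing the abstain/reject option.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, K + 1),
)

def predict_with_rejection(x):
    logits = model(x)                # shape: (batch, K + 1)
    pred = logits.argmax(dim=-1)     # winning logit per sample
    reject = pred == K               # samples landing on the extra class are abstained on
    return pred, reject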
Reviewer 2
This paper tackles an important problem; methods that improve models' ability to express uncertainty are of high importance to the field. The proposed method is simple, flexible, and seemingly novel. The method does not cover or evaluate regression problems, which might limit the scope of its impact. Even though the obtained results do not outperform the state of the art, this line of research on incorporating portfolio theory into confidence estimation is interesting and can inspire future studies. The paper is sometimes difficult to follow, and its presentation quality and clarity could be improved.
Reviewer 3
Originality: this is, in my opinion, a strong point of the paper, which nicely bridges two different theories and brings an interesting interpretation to rejected samples and their nature within learning theory.
Quality: good, even if some typos remain here and there (L114, L156, L200, L272).
Clarity: the authors often speak of assessing the uncertainty associated with a sample; however, what they provide is a scalar evaluation of the overall confidence we can attach to a given instance being properly classified (so that we can possibly reject it according to our policy). This remains far from the full assessment of uncertainty that a calibrated probability distribution or a gradual conformal prediction (whose sets would range from the full label set down to the empty one) would give. It would be useful if the authors were clearer about this from the start.
Significance: from the paper, I had the feeling that Theorems 1 and 2 are quite direct transpositions of known results, while Theorem 3 is somewhat obvious (if every horse pays more than m times my money, then a uniform bet on all of them is a sure win and no cash reserve is needed; and if I cannot win more than my stake by betting on any of them, it is of course better to keep my money). So it could be argued that those main results are in fact expected. Would the authors agree with that? That said, I think the bridges that are made are sufficiently significant by themselves.
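[To make the horse-race intuition behind Theorem 3 concrete, here is a worked version; the notation m, o_i, b_i is mine, not the paper's: m horses, horse i pays odds o_i per unit staked, and we bet b_i >= 0 with the total stake at most 1, the remainder kept as cash.]

\[
  b_i = \tfrac{1}{m} \ \forall i
  \;\Longrightarrow\;
  \text{return when horse } k \text{ wins} \;=\; o_k b_k \;=\; \frac{o_k}{m} \;>\; 1
  \quad \text{whenever } o_i > m \ \forall i,
\]
\[
  o_i \le 1 \ \forall i
  \;\Longrightarrow\;
  o_k b_k \;\le\; b_k \;\le\; \sum_{i=1}^{m} b_i \;\le\; 1 .
\]

In the first case a sure gain needs no cash reserve; in the second, keeping the whole stake (rejecting every bet) is at least as good as any bet.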