NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
The authors study fast approximations to the "full" conformal prediction set using homotopy-type algorithms for penalized convex problems. There are some interesting ideas here, and the reviewers were mostly positive, though they raised a number of relevant critiques. All in all, I am leaning favorable; however, I have two concerns that the authors must address in the camera-ready version.

First, the experiments need to improve. Declining to run comparisons against the efficient ridge method [18] or the efficient lasso method [11] on the grounds that these problems are already solved is not a good excuse. It is instructive to also show examples where your algorithm is not ideal, so that we can see how much it loses to exact methods and thereby understand what is happening with the approximations. The experiments also need to improve more broadly, as pointed out by Reviewer 4.

Second, the paper's explanations are confusing in places. Even the basic explanation of conformal prediction needs work. I know this methodology well, so I understood what the authors are going for, but a reader unfamiliar with conformal prediction would be lost. Nowhere is Rank() defined (rank with respect to **what** set? this is critical). The references to [5] and [13] are unnecessary and should be removed: it should be explained **from first principles** what is going on here, not by pointing to books and papers on exchangeability or testing.
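To make the Rank() point concrete, here is a minimal sketch of full conformal prediction with a ridge-regression nonconformity score. It is illustrative only, not the authors' homotopy algorithm: the function name, the absolute-residual score, and all parameters are assumptions. The key detail the paper must state explicitly is that the rank (equivalently, the p-value) of each candidate's score is computed with respect to all n+1 scores of the augmented data set.

```python
import numpy as np

def full_conformal_ridge(X, y, x_new, y_grid, alpha=0.1, lam=1.0):
    """Illustrative full conformal prediction with a ridge score.

    For each candidate label y_cand, ridge is refit on the data
    augmented with (x_new, y_cand), and the rank of the candidate's
    nonconformity score is taken among ALL n+1 augmented scores.
    """
    n, d = X.shape
    Xa = np.vstack([X, x_new])            # augmented design, (n+1) x d
    included = []
    for y_cand in y_grid:
        ya = np.append(y, y_cand)         # augmented responses
        # ridge fit on the augmented data
        beta = np.linalg.solve(Xa.T @ Xa + lam * np.eye(d), Xa.T @ ya)
        scores = np.abs(ya - Xa @ beta)   # nonconformity scores
        # conformal p-value: proportion of the n+1 scores at least as
        # large as the candidate's own score (a rank statistic over
        # the augmented set, not the training set alone)
        pval = np.mean(scores >= scores[-1])
        if pval > alpha:
            included.append(y_cand)
    return included
```

A candidate label is kept when its score is not extreme relative to the full augmented sample; under exchangeability this yields 1 - alpha coverage, which is exactly the first-principles argument the paper should spell out.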