NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID:696
Title:Combinatorial Inference against Label Noise


		
This paper proposes a completely new way of learning with noisy labels: it cleverly constructs certain meta classes and learns classifiers to predict meta-class labels; predictions of the original base-class labels can then be inferred by combining the predictions over multiple meta-class spaces (each containing several meta classes). As a result of this algorithmic design, the noise level is empirically shown to be reduced in the meta-class space compared with the original base-class space (though this benefit is not theoretically guaranteed). The intuition behind the idea is that multiple base classes collapse into a single meta class, so label noise within the same meta class vanishes.

The clarity, novelty, and significance are all above the corresponding thresholds of NeurIPS, and thus the paper should clearly be accepted. The problem under consideration is of practical interest and may have a huge impact on our daily lives (as mentioned in the introduction, noisy labels are everywhere in the wild). This paper manipulates the output representation (i.e., the fitting target); it is slightly similar to label correction, but the gap between the two is significant enough for this paper to stand as a new direction in label-noise learning.

In order to address the broader NeurIPS audience, I offer my quick thoughts on the paper (I did not carefully check the full paper due to limited time):

A. The title is too short and not sufficiently informative: the core concept "meta class" does not appear in the title at all. While in principle I should not influence your choice of title too much, the title is the most important part of a paper, so I strongly suggest considering a title that reflects the intuition (i.e., multiple base classes collapse into a single meta class, so label noise within the same meta class vanishes).

B. In the literature review, sample selection/reweighting methods and label correction methods are lumped together. While both directions try to identify good data among noisy data, the former simply drops possibly bad data, whereas the latter still tries to fix the labels of the bad data. This difference should be mentioned in the introduction and related work sections.

C. Two related papers following Co-teaching should be cited: Co-teaching+ (entitled "How does disagreement help generalization against label corruption?") and Pumpout (entitled "Pumpout: A meta approach to robust deep learning with noisy labels"). Both follow the line of sample selection.

D. There are some typos and grammatical issues. Please carefully check the English once more.

[This meta-review was reviewed and revised by the Program Chairs]
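To illustrate the kind of combinatorial inference described above, here is a minimal sketch of combining predictions over several meta-class spaces to score base classes. The construction is hypothetical: it uses random binary partitions of the base classes (in the style of error-correcting output codes) and simulated meta-classifier probabilities, and it may differ from the paper's actual meta-class design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_base = 8    # number of original (base) classes
n_spaces = 5  # number of meta-class spaces (hypothetical choice)

# partitions[s][c] = meta-class label of base class c in space s;
# here each space is a random binary partition of the base classes.
partitions = rng.integers(0, 2, size=(n_spaces, n_base))

# Simulated meta-classifier outputs for one input x:
# meta_probs[s][m] = P(meta class m | x) in space s (two meta classes per space).
meta_probs = rng.dirichlet(np.ones(2), size=n_spaces)

# Combine: each base class inherits, from every space, the probability of
# the meta class it belongs to; multiplying across spaces (sum of logs)
# yields a score over the original base-class space.
log_scores = np.zeros(n_base)
for s in range(n_spaces):
    log_scores += np.log(meta_probs[s, partitions[s]])

pred = int(np.argmax(log_scores))  # inferred base-class prediction
```

Because a single mislabeled example often stays inside the same meta class, several of the per-space targets remain correct, which is one way to see why the effective noise level can drop in the meta-class spaces.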