NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019 at Vancouver Convention Center
Paper ID: 6944
Title: Leveraging Labeled and Unlabeled Data for Consistent Fair Binary Classification

Reviewer 1

While I personally am not very excited by yet another algorithm-for-group-fairness-in-binary-classification paper, this work does fill a real gap in the literature. I did not check the proofs carefully, but otherwise the work is well written and appears correct.

That said, the paper performs one unfortunate sleight of hand: it starts with exact fairness and derives the Bayes-optimal classifier for that exact case, but then only proves that its algorithm is exactly fair in the limit. While exact fairness won't be possible on the distribution, it should be possible on the sample, at least (a sketch of how follows this review). Relatedly, I would prefer to see the finite-sample results the authors claim to have but do not show, even if those results are highly dependent on \eta.

Assumption 2.2 seems difficult to buy as a reasonable assumption, regardless of whether it has been used before. In practice, there may be only finitely many values of X with non-zero support, and therefore non-zero probability mass on the events in question. I would guess that removing the assumption would make the results messier, but it should still be possible to write them down without it.

Finally, the experiments left me more puzzled than satisfied. The claim was that this method is preferable to the other top performer because it is more general and can be applied to random forests (RF), which was indeed one of the best algorithms on these datasets for minimizing error. But RF is clearly harder to make fair: on at least some of the datasets, the RF + post-processing methods were not top performers.

Other things:
- There are quite a few typos/grammatical issues, starting with lines 34, 38, 65, 72, 87, 90, 93, etc.
- Line 88: \mathcal{G} is not a set.

Edit after author response: Thanks for addressing my concerns in detail.
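To make the point about exact fairness on the sample concrete: when scores have atoms (e.g., only finitely many values of X, as in the Assumption 2.2 objection), no deterministic threshold hits a target true-positive rate exactly, but randomizing on the boundary score does. A minimal sketch, not taken from the paper; all names are illustrative:

```python
import numpy as np

def randomized_threshold(pos_scores, target_tpr):
    """Return (t, gamma) such that predicting 1 when score > t, and with
    probability gamma when score == t, achieves exactly target_tpr on
    this sample of positive-class scores, even when scores have atoms."""
    pos_scores = np.asarray(pos_scores, dtype=float)
    for t in np.sort(np.unique(pos_scores))[::-1]:
        above = (pos_scores > t).mean()  # TPR of the strict rule at t
        at = (pos_scores == t).mean()    # probability mass of the atom t
        # The intervals [above, above + at] tile [0, 1], so some t admits
        # an interpolating acceptance probability gamma.
        if above <= target_tpr <= above + at:
            return t, (target_tpr - above) / at
    return pos_scores.min(), 1.0  # unreachable for target_tpr in [0, 1]
```

Applying this separately per group, with a common target TPR, gives equal opportunity exactly on the sample.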

Reviewer 2

Originality:
- Although the Hardt et al. paper suggested this approach, the paper claims to be the first to actually show it is possible. It is the first to decouple the risk and the fairness constraint in terms of the regression function and the base rates.
- Using unlabeled data to facilitate fairness along with better accuracy is novel.

Quality:
- The assumptions made about the model are very well justified. The discussion after each assumption provides context for why it makes sense and why it is needed to study the model; these discussions give very good intuition and set the stage for the proofs.
- The results in the appendix are quite strong, too.

Clarity:
- Overall, the paper has a very smooth flow, whether in the discussion of the assumptions or in the remarks.
- The proof sketch is thorough yet simple enough to convey the gist intuitively.
- Once again, the remarks are very helpful. For instance, the authors anticipate natural follow-up questions and answer them very well (e.g., why asymptotic convergence instead of a rate of convergence). Even in the appendix, they provide guidance on how to read the material (e.g., which preliminary concepts to understand before proceeding to the proofs).
- The paper follows the NeurIPS guidelines very strictly: transparency about how hyperparameters were chosen, confidence intervals for accuracy and fairness, etc.
- It's not a big deal, but the figures are somewhat hard to read; one suggestion is to change the yellow highlight color (for the Adult dataset) to something else.

Significance:
- There is already a huge literature on plug-in approaches; the paper connects the fairness literature with it.
- The proposed approach of decoupling things into the regression function and the base rates may be useful for other notions of fairness as well (a sketch of this recipe appears after this review).
- Empirically, their methods stay competitive with, if not outperform, the current state of the art.
- The appendix also discusses what the optimal classifier without the sensitive feature looks like, although the proof of consistency of the empirical estimator is not provided; once again, they validate this case empirically.

Question:
- Isn't it pretty common to assume bounded Rademacher complexity (as in Agarwal et al.) to argue about uniform convergence (generalization error) of the classifier?

***** POST REBUTTAL *****
I am also concerned about the non-trivial changes included in their response, and I'm not too excited about another group-fairness paper. However, I think it's still well written and the techniques are somewhat interesting. Therefore, I'll keep my score at 8.
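To make the decoupling concrete: once a regression function eta(x, s) ≈ P(Y=1 | X=x, S=s) has been fit on the labeled sample, unlabeled data suffice to tune group-wise thresholds, because both the risk and the equal-opportunity gap of a thresholded rule can be estimated by reweighting unlabeled points by the fitted eta (the unseen label is replaced by its conditional mean). A minimal sketch of a plug-in recipe in this spirit, not the authors' exact estimator; the learner, the slack value, and all names are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_plugin(X_lab, s_lab, y_lab, X_unl, s_unl, slack=0.02):
    # Step 1: regression function from the labeled sample, using any
    # off-the-shelf learner (here a random forest, for illustration).
    clf = RandomForestClassifier(n_estimators=200)
    clf.fit(np.column_stack([X_lab, s_lab]), y_lab)
    eta = clf.predict_proba(np.column_stack([X_unl, s_unl]))[:, 1]

    def tpr(g, t):
        # P(pred=1 | Y=1, S=g) estimated on UNLABELED points: the unseen
        # label Y is replaced by its conditional mean eta(X, S).
        e = eta[s_unl == g]
        return (e * (e > t)).sum() / e.sum()

    def risk(t0, t1):
        # Misclassification risk, also estimable without labels:
        # E[ eta * 1{pred=0} + (1 - eta) * 1{pred=1} ].
        pred = eta > np.where(s_unl == 1, t1, t0)
        return np.where(pred, 1.0 - eta, eta).mean()

    # Step 2: grid-search group-wise thresholds minimizing estimated
    # risk subject to a small estimated equal-opportunity gap.
    grid = np.linspace(0.05, 0.95, 91)
    tprs = {g: np.array([tpr(g, t) for t in grid]) for g in (0, 1)}
    best, best_t = np.inf, (0.5, 0.5)
    for i, t0 in enumerate(grid):
        for j, t1 in enumerate(grid):
            if abs(tprs[0][i] - tprs[1][j]) <= slack:
                r = risk(t0, t1)
                if r < best:
                    best, best_t = r, (t0, t1)
    return clf, best_t  # predict 1{eta(x, s) > t_s} on new points
```

Under this reading, the unlabeled sample only calibrates the thresholds, while the quality of the fitted eta is still governed by the labeled sample size, which is exactly the n-versus-N interaction the reviews ask about.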

Reviewer 3

This paper deals with the problem of enforcing Equal Opportunity (Hardt et al.) while leveraging both labeled and unlabeled data. It takes as given a trained black-box classifier and enforces Equal Opportunity on top of it using a threshold rule (sketched after this review). Notably, the paper assumes there are only two sensitive groups, and it is unclear how to extend it to multiple groups (see Improvements).

Originality: As far as I know, up until now, most methods for enforcing Equal Opportunity (and similar fairness metrics that require labels) have only been able to make use of labeled data at training time. This work provides a novel way to use both labeled and unlabeled data at training time.

Quality: The theoretical results seem sound and relevant. It is not ideal that the authors provide only asymptotic results, though, and it would be nice to have at least a verbal description of how the amount of unlabeled data N affects the convergence properties. The experiments also appear to be run well, with appropriate comparisons to recent methods. However, I would like to see more comparisons with direct constrained-optimization approaches that work on nonlinear models, such as Cotter et al. [12] or Agarwal et al. [2], since the paper only compares against a direct constrained-optimization approach (Zafar [48]) on a linear model.

Clarity: The paper is organized clearly, with generally easy-to-understand notation. There are a few remarks that I would prefer to see fleshed out in the main body of the paper, such as Remark 3.3.

Significance: In practice, unlabeled data is often easier to come by than labeled data; in fact, it may come easily in very large quantities, possibly orders of magnitude more than labeled data. Therefore, if this paper can effectively leverage unlabeled data to do a better job of enforcing Equal Opportunity, it can have great practical benefit. Unfortunately, the paper provides only asymptotic theoretical results, so it is unclear how much the convergence rate benefits from the amount of unlabeled data (N) compared to the amount of labeled data (n). It would be really nice to see how these two terms interact in the convergence rate (also mentioned in Improvements).

Overall, I think this paper provides a novel methodology that adds value over previous methods. The main factors holding me back from a higher score are the lack of theoretical results about the effect of the size of N (e.g., a convergence rate that includes N) and the lack of empirical results illustrating the effect of varying N, even in the absence of a theoretical convergence rate.
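The threshold rule on top of a black-box classifier can be pictured as follows: each group's threshold is set to the score quantile that attains a common true-positive rate, which is exactly the Equal Opportunity criterion. A minimal Hardt-style sketch using labeled data only (the paper's contribution is to calibrate the analogous rule with unlabeled data); target_tpr and all names are illustrative:

```python
import numpy as np

def eo_thresholds(scores, labels, groups, target_tpr=0.8):
    """Group-wise thresholds on a pre-trained classifier's scores that
    attain a common true-positive rate, i.e. Equal Opportunity."""
    thresholds = {}
    for g in (0, 1):
        pos = scores[(labels == 1) & (groups == g)]
        # The (1 - target_tpr) quantile of positive-class scores in
        # group g is where that group's TPR crosses target_tpr.
        thresholds[g] = np.quantile(pos, 1.0 - target_tpr)
    return thresholds

def predict(scores, groups, thresholds):
    t = np.where(groups == 1, thresholds[1], thresholds[0])
    return (scores > t).astype(int)
```

Scanning target_tpr over a grid and keeping the error-minimizing value recovers the usual accuracy/fairness trade-off for this kind of post-processing.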