Part of Advances in Neural Information Processing Systems 32 (NeurIPS 2019)
Michael Wick, Swetasudha Panda, Jean-Baptiste Tristan
The prevailing wisdom is that a model's fairness and its accuracy are in tension with one another. However, there is a pernicious {\em modeling-evaluating dualism} bedeviling fair machine learning, in which phenomena such as label bias are appropriately acknowledged as a source of unfairness when designing fair models, only to be tacitly abandoned when evaluating them. We revisit fairness and accuracy, this time under a variety of controlled conditions in which we vary the amount and type of bias. We find, under reasonable assumptions, that the tension between fairness and accuracy is illusory, and vanishes as soon as we account for these phenomena during evaluation. Moreover, our results are consistent with the opposing conclusion: fairness and accuracy are sometimes in accord. This raises the question, {\em might there be a way to harness fairness to improve accuracy after all?} Since most notions of fairness are defined with respect to the model's predictions rather than the ground-truth labels, this provides an opportunity to see whether we can improve accuracy by harnessing appropriate notions of fairness over large quantities of {\em unlabeled} data, with techniques like posterior regularization and generalized expectation. Indeed, we find that semi-supervision improves not only fairness but also accuracy, and has advantages over existing in-processing methods, which succumb to selection bias on the training set.
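As a minimal sketch of the kind of controlled label-bias condition the abstract describes (not the authors' experimental setup; the data, names, and bias mechanism here are all hypothetical), one can flip a fraction of one group's positive labels and observe that the labels used at evaluation time then disagree with the unbiased ground truth:

```python
# Hypothetical sketch: inject label bias against one group, then compare the
# biased labels (what a naive evaluation would score against) to the truth.
import numpy as np

rng = np.random.default_rng(0)

def inject_label_bias(y, group, rate):
    """Flip positive labels to negative for group==1 with probability `rate`."""
    y_biased = y.copy()
    flip = (group == 1) & (y == 1) & (rng.random(len(y)) < rate)
    y_biased[flip] = 0
    return y_biased

# hypothetical data: true labels and a binary protected attribute
n = 1000
group = rng.integers(0, 2, size=n)
y_true = rng.integers(0, 2, size=n)

for rate in (0.0, 0.2, 0.4):
    y_obs = inject_label_bias(y_true, group, rate)
    # the same predictions score differently depending on whether the biased
    # or the unbiased labels are treated as ground truth at evaluation time
    agreement = (y_obs == y_true).mean()
    print(f"bias rate {rate:.1f}: observed labels agree with truth {agreement:.2%}")
```

Evaluating a model against `y_obs` rather than `y_true` is exactly the modeling-evaluating dualism at issue: a model corrected for the bias looks less accurate only because the evaluation labels are themselves biased.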
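To illustrate the semi-supervised idea, here is a minimal sketch in the spirit of posterior regularization / generalized expectation, not the paper's exact method: a logistic model fit on a small labeled set, with a squared demographic-parity penalty (an assumed choice of fairness notion) computed on the model's predictions over unlabeled data. All data and parameter values are hypothetical.

```python
# Hypothetical sketch: supervised logistic loss on labeled data plus a
# fairness penalty on *unlabeled* data, since the penalty depends only on
# the model's predictions, not on ground-truth labels.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical synthetic data
n_lab, n_unlab, d = 200, 2000, 5
X_lab = rng.normal(size=(n_lab, d))
y_lab = (X_lab[:, 0] + 0.5 * rng.normal(size=n_lab) > 0).astype(float)
X_unlab = rng.normal(size=(n_unlab, d))
group = rng.integers(0, 2, size=n_unlab)   # protected attribute, unlabeled pool
m1, m0 = group == 1, group == 0

w = np.zeros(d)
lr, lam = 0.1, 1.0                         # lam weights the fairness penalty

for step in range(500):
    # supervised term: logistic-loss gradient on the small labeled set
    p_lab = sigmoid(X_lab @ w)
    grad = X_lab.T @ (p_lab - y_lab) / n_lab

    # fairness term: squared demographic-parity gap of predicted positive
    # rates across groups, measured on unlabeled data
    p_un = sigmoid(X_unlab @ w)
    gap = p_un[m1].mean() - p_un[m0].mean()
    dp = p_un * (1 - p_un)                 # derivative of sigmoid w.r.t. logit
    g1 = (dp[m1][:, None] * X_unlab[m1]).mean(axis=0)
    g0 = (dp[m0][:, None] * X_unlab[m0]).mean(axis=0)
    grad += lam * 2 * gap * (g1 - g0)      # gradient of gap**2

    w -= lr * grad

print(f"final demographic-parity gap: {gap:.4f}")
```

The key design point the sketch captures is that the regularizer touches only model predictions, so arbitrarily large unlabeled pools can supply the fairness signal while the labeled set supplies the accuracy signal.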