NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 8591
Title: On Human-Aligned Risk Minimization


This paper is well-written and well-motivated, and makes the following interesting contributions:

1. It proposes human-aligned risk measures suitable for ML by constructing the risk measures using cumulative prospect theory (CPT) -- a novel and interesting idea (a generic sketch of such a risk functional is given at the end of this review).
2. It establishes a connection between the choice of risk estimator and properties (specifically fairness) of the learned predictor.

However, based on reviewer feedback, there are also certain aspects/weaknesses that need to be addressed:

1. Multiple reviewers pointed out that it is not entirely clear why fairness is expected to improve under the human risk measure proposed in this paper. The authors need to provide a clear justification for this.
2. It is also unclear how and why cumulative prospect theory matches human risk attitudes and why it should be applied to surrogate losses. This should be clearly justified in the paper.
3. As multiple reviewers pointed out, the evaluation is somewhat weak: there are no comparison baselines for the fairness experiments, feature weights are not a meaningful metric for comparing models, the effects in the figures appear small and noisy, and evidence that CPT matches human utility is missing. We strongly encourage the authors to address these issues.

All in all, this is a borderline paper with some important and interesting contributions, but it is lacking in experimental evaluation and in the justification of certain assumptions.
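For concreteness (see contribution 1 above), here is a minimal sketch of the kind of risk functional under discussion, assuming the standard Tversky-Kahneman form of the CPT value of a random loss Z; this is a generic formulation, not necessarily the paper's exact construction:

\[
\rho_{\mathrm{CPT}}(Z) \;=\; \int_{0}^{\infty} w^{+}\!\big(\Pr[u^{+}(Z) > t]\big)\,dt \;-\; \int_{0}^{\infty} w^{-}\!\big(\Pr[u^{-}(Z) > t]\big)\,dt,
\]

where \(u^{+}\) and \(u^{-}\) are the gain and loss parts of a value function measured against a reference point, and \(w^{+}, w^{-}\) are probability weighting functions that overweight small probabilities of extreme outcomes. Training would then minimize an empirical estimate of such a functional of the (surrogate) loss distribution in place of the usual expected loss; weakness 2 above is precisely the question of whether this transfer from human judgments about monetary outcomes to surrogate losses is justified.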