NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center

### Reviewer 1

* Summary: This work presents a framework for task delegability and empirically investigates, through a survey, people's preferences for delegating different tasks to AI. The work is both theoretical and empirical: the theoretical part concerns the framework, while the empirical part analyzes the survey responses, mostly through correlation analysis.

* Originality: To the best of my knowledge, the presented framework for task delegability is novel (the authors state that it builds on prior ideas in the literature). The authors state that this is the first empirical study of task delegability. They provide a well-structured related work section that covers a diverse list of related topics, and they also highlight work closely related to the present study and point out differences.

* Quality and Significance: The technical quality of the theoretical part of the paper is quite good. The delegability framework is presented clearly, and motivations are provided for its various components. I find the empirical part of the paper technically OK; its methodology is essentially correlation analysis. Some things could be clarified in the empirical part:
  - Line 224: be more specific and say that the agreement is between individuals.
  - Table 2: mention how the correlations are calculated, i.e., what you have aggregated over.
  - Table 2: please do not omit components that are always non-significant; this is also interesting information.
  - Table 2: are the p-values corrected for multiple comparisons?

  In their concluding discussion, the authors point out some limitations of their work, which is good. However, it would also be important to include some critical discussion of the effect sizes of the observed correlations (the correlations are quite low, even for the significant findings).
Overall, I find the theoretical part of the paper (the introduction of the framework) quite OK. The empirical part seems a bit simplistic given the nice dataset the authors have collected, and some kind of more detailed analysis could be warranted here. Some of the conclusions seem rather self-evident, but this is OK since they are grounded in the empirical findings. The case studies did not convey much extra information, I think. One must also ask how relevant and important these findings are, i.e., what their potential significance is. I am not sure the results have high practical relevance, and hence the significance of this study might not be that high. I am also not sure this paper is a good fit for NeurIPS, since the study is more sociological than computer-scientific in nature (although the topic of study is, naturally, related to computer science).

* Clarity: The structure and language of this paper are OK, and the ideas are clearly communicated.

* Update after Author Feedback: I have now read the author feedback. The authors' responses addressed some of the questions I had concerning the results. I also now think that this paper does fit the conference. I have hence increased my rating of this paper.