Despite a disagreement from R4, there is a consensus among most of the knowledgeable reviewers that this is a good paper. After reading the paper, I also concur that the problem considered in this paper is important and the proposed solution is interesting, novel, and simple. Hence, I recommend that the paper be accepted as a poster.

This paper considers the problem of learning with an augmented class using unlabeled data (also known as open set recognition). That is, the authors assume that a new class, unavailable at training time, can emerge at test time. As a result, there is a distributional shift between the training and test distributions (i.e., a non-i.i.d. setting). This is itself an important problem. The idea proposed in this work is to use labeled training data together with "unlabeled" data from the test distribution (which contains the augmented class) to construct an unbiased estimate of the risk, from which the classifier can be learned. Although the risk-rewriting technique has been used extensively in several previous works, I find the idea of using unlabeled data to construct an unbiased estimate of the risk quite interesting.

From their reviews, R1, R2, and R3 also appear to support the acceptance of this paper. Nevertheless, R4 raised a major concern during the discussion regarding the use of test data during training. While this is a valid point, and R4 seems to have taken his/her stance after thorough discussion, I do not think it can be used to justify a rejection. In fact, I believe that the misunderstanding may stem from an ambiguity in the presentation of how the unlabeled data is used in the training process. Hence, I would like to suggest that the authors improve the presentation in the camera-ready version and clarify the role of unlabeled data in this framework (especially in the experiment section).
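For concreteness, the risk-rewriting idea discussed above can be sketched roughly as follows. This is a hedged illustration of the general technique, not the paper's exact estimator: assuming the test marginal decomposes as a mixture of the known-class distributions (with known or separately estimated priors theta_k) plus the new class, the unobservable new-class risk term can be rewritten as a difference of quantities estimable from unlabeled test-distribution data and labeled training data. All function and variable names here are hypothetical.

```python
import numpy as np

# Sketch of risk rewriting with unlabeled data (illustrative, not the
# paper's estimator). Assumption: the test marginal decomposes as
#   p_test(x) = sum_k theta_k * p_k(x) + theta_new * p_new(x),
# so the unobservable new-class risk term
#   theta_new * E_{p_new}[loss(f(x), new)]
# can be rewritten as
#   E_{p_test}[loss(f(x), new)] - sum_k theta_k * E_{p_k}[loss(f(x), new)],
# which is estimable from unlabeled data (drawn from p_test) together
# with labeled data from each known class k (drawn from p_k).

def rewritten_new_class_risk(loss_new_unlabeled, loss_new_labeled_by_class, thetas):
    """Unbiased estimate of theta_new * E_{p_new}[loss(f(x), new)].

    loss_new_unlabeled: losses loss(f(x), new) on unlabeled samples (~ p_test).
    loss_new_labeled_by_class: list of loss arrays, one per known class (~ p_k).
    thetas: known-class priors theta_k (assumed given or estimated separately).
    """
    term_unlabeled = np.mean(loss_new_unlabeled)
    term_labeled = sum(theta_k * np.mean(losses_k)
                       for theta_k, losses_k in zip(thetas, loss_new_labeled_by_class))
    return term_unlabeled - term_labeled
```

Note that, as in related risk-rewriting settings (e.g. PU learning), only the marginal of the unlabeled data is used here; no test labels enter the training objective, which is presumably the clarification R4's concern calls for.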