This paper presents Rumi-GAN, which aims to improve GAN training by incorporating negative samples. Rumi-GAN can be readily applied to other GAN-based frameworks, requiring only that the training data be split into positive and negative samples. The authors compare against ACGAN and LSGAN, as well as the modified Rumi-LSGAN, on several datasets. Reviewers agreed on the novelty after the rebuttal. Some remaining issues should be addressed in the final version, e.g., more comparisons with state-of-the-art cGANs and more details on the FID scores.
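For context, a minimal sketch of how such a positive/negative split could enter a least-squares GAN objective is given below. This is an illustrative Rumi-style LSGAN variant, not the authors' exact formulation; the target constants `b_pos`, `b_neg`, `b_fake`, and `c_pos` are placeholder values.

```python
# Hypothetical sketch of a Rumi-style LSGAN loss with negative samples (PyTorch).
import torch
import torch.nn.functional as F


def d_loss_rumi_lsgan(d_pos, d_neg, d_fake, b_pos=1.0, b_neg=-1.0, b_fake=-1.0):
    # Discriminator sees three kinds of inputs: positive reals, negative reals,
    # and generated samples. Each group is pushed toward its own target value.
    return (F.mse_loss(d_pos, torch.full_like(d_pos, b_pos))
            + F.mse_loss(d_neg, torch.full_like(d_neg, b_neg))
            + F.mse_loss(d_fake, torch.full_like(d_fake, b_fake)))


def g_loss_rumi_lsgan(d_fake, c_pos=1.0):
    # Generator pushes discriminator outputs on its samples toward the
    # positive-class target, so it learns only the positive distribution.
    return F.mse_loss(d_fake, torch.full_like(d_fake, c_pos))
```

In this sketch, the only change relative to standard LSGAN is that real data is partitioned into positive and negative subsets with distinct discriminator targets, which mirrors the ease of integration claimed in the summary above.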