Sun, Dec 8th through Sat, Dec 14th, 2019 at the Vancouver Convention Center
1) What is the relationship of the pseudo-ensemble to classical ensemble methods, e.g., bagging or boosting? 2) For the semi-supervised experiment, the proposed method should be compared with state-of-the-art methods. 3) It would be better to run experiments on more datasets, since MNIST is a very simple dataset.
The originality of this paper is clear, stemming mainly from the construction of the classifier. The paper is well written and makes a significant contribution to the SSL literature.
The main contribution of this paper is in setting up a three-player game for semi-supervised learning, in which the generator tries to maximize the margin of the examples it generates in competition with a classifier, alongside the traditional GAN objective of fooling a discriminator. This idea is novel to my knowledge. One small reservation I have with this method is that, as the quality of the GAN and its generated images increases, margin maximization on generated examples becomes counterproductive for the classifier (as acknowledged by the authors), which requires careful early stopping. However, this is standard practice with GANs and should not be held against the paper. The paper is generally of high quality and significance, but both could be improved by a broader treatment of related work. For instance, MixUp (Zhang et al., ICLR 2018) and CutMix (Yun et al., 2019) both generate artificial examples by combining real images with simple schemes to improve supervised learning performance. MixMatch (Berthelot et al., 2019) takes these ideas to the semi-supervised learning domain, forming an alternative to the GAN approach that is worth comparing against experimentally, or at least discussing in related work.
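For context on the example-combining scheme mentioned above: MixUp forms a synthetic example as a convex interpolation of two labeled examples, with the mixing coefficient drawn from a Beta distribution. A minimal sketch follows; the function name, `alpha` default, and one-hot label convention are illustrative choices, not taken from the paper under review.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """MixUp-style blending of two labeled examples (Zhang et al., 2018).

    x1, x2: input arrays of the same shape; y1, y2: one-hot label vectors.
    alpha: Beta-distribution concentration controlling interpolation strength.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)      # mixing coefficient in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2   # convex combination of inputs
    y = lam * y1 + (1.0 - lam) * y2   # same combination of the labels
    return x, y
```

Because the labels are mixed with the same coefficient as the inputs, the resulting soft label remains a valid probability vector, which is what lets the scheme act as a simple regularizer on the decision boundary.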