NeurIPS 2019
This is a purely empirical study of the co-generation problem in the context of deep unsupervised generative models: given that part of an example is observed, one must fill in the remaining (unobserved) part in a plausible way. The problem is well motivated by applications such as image inpainting, and the authors provide an extensive overview of the existing literature. The proposed solution is simple and uses an already trained GAN generator $G: Z \to X$ to find latent vectors $z$ whose outputs $G(z)$ match the observed part of the image. The authors demonstrate empirically that a naive solution, optimizing $z$ by gradient descent in the latent space, often gets stuck in poor local minima and does not work. The paper proposes to replace gradient descent with another optimization technique based on annealed importance sampling, which is simple to implement. The resulting method is demonstrated to significantly outperform the naive baseline both in a controlled toy scenario (Figs 1, 2, 3) and on more realistic and challenging data sets (Figs 4, 5).

The paper is very well polished, and the presentation is clear and easy to follow. I recommend acceptance; however, I would ask the authors to address all of the feedback provided by the reviewers in the rebuttal (this should not be too difficult), especially Reviewer 1's comments regarding the train/test partitioning.
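To make the contrast between the two optimization strategies concrete, here is a minimal sketch, not taken from the paper: the generator `G`, its `latent_dim` attribute, the observed image `x_obs`, and the binary `mask` are hypothetical placeholders, and the annealed loop is a simplified Langevin-style stand-in for the paper's annealed-importance-sampling scheme rather than a faithful reimplementation of it.

```python
# Sketch of latent-space co-generation with a pretrained generator G.
# Assumptions (not from the paper): G maps (1, latent_dim) tensors to images,
# x_obs is the partially observed image, mask is 1 on observed pixels, 0 elsewhere.
import torch

def recon_loss(G, z, x_obs, mask):
    """Squared error between G(z) and x_obs on the observed pixels only."""
    return ((mask * (G(z) - x_obs)) ** 2).sum()

def cogenerate_gd(G, x_obs, mask, steps=500, lr=1e-2):
    """Naive baseline: gradient descent on z; prone to poor local minima."""
    z = torch.randn(1, G.latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon_loss(G, z, x_obs, mask).backward()
        opt.step()
    return G(z).detach()

def cogenerate_annealed(G, x_obs, mask, steps=500, step_size=1e-2):
    """Annealed sampling over z: noisy updates whose temperature decays to 0,
    helping the search escape the local minima that trap plain descent."""
    z = torch.randn(1, G.latent_dim, requires_grad=True)
    for t in range(steps):
        temperature = 1.0 - t / steps          # simple linear annealing schedule
        loss = recon_loss(G, z, x_obs, mask)
        grad, = torch.autograd.grad(loss, z)
        with torch.no_grad():
            z -= step_size * grad              # move toward the observed part
            z += torch.randn_like(z) * (2 * step_size * temperature) ** 0.5
    return G(z).detach()
```

The point of the sketch is only the structural difference the paper exploits: the baseline follows the deterministic gradient and can stall, whereas the annealed variant injects gradually shrinking noise so the search behaves like sampling at high temperature early on and like descent at the end.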