NeurIPS 2020
### Recursive Inference for Variational Autoencoders

### Meta Review

The paper introduces RMIM -- an amortized version of boosted VI (BVI), based on a slightly different objective, which uses a recursively parameterized mixture as the variational posterior.
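To make the "mixture as the variational posterior" idea concrete, here is a minimal NumPy sketch of evaluating the log-density of a K-component diagonal-Gaussian mixture posterior. This is illustrative only: it shows the generic mixture form that BVI-style methods build component by component, not the paper's recursive RMIM parameterization.

```python
import numpy as np

def log_mixture_gaussian(z, means, log_stds, weights):
    """Log-density of a diagonal-Gaussian mixture q(z) = sum_k w_k N(z; mu_k, sigma_k^2).

    Illustrative sketch only: the recursive parameterization of the
    components used by RMIM is not reproduced here.
    z:        latent point, shape (D,)
    means:    component means, shape (K, D)
    log_stds: component log standard deviations, shape (K, D)
    weights:  mixture weights summing to 1, shape (K,)
    """
    var = np.exp(2.0 * log_stds)
    # Per-component diagonal-Gaussian log-densities, shape (K,)
    log_probs = -0.5 * np.sum(
        np.log(2.0 * np.pi) + 2.0 * log_stds + (z - means) ** 2 / var, axis=-1
    )
    # Stable log-sum-exp over the weighted components
    a = log_probs + np.log(weights)
    m = a.max()
    return m + np.log(np.exp(a - m).sum())
```

With K = 1 this reduces to an ordinary Gaussian log-density, which gives a quick sanity check of the implementation.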
Improving inference in VAEs is an area of wide interest and potentially high impact. The reviewers thought the paper was mostly well written and the approach sensible. The experimental results are encouraging, with RMIM outperforming quite a few baselines, though there were some potential issues with the experimental setup as explained below.
The paper has substantial room for improvement, however, with the reviewers making several good suggestions. For example, the derivation of the objective needs to be made clearer, including an explanation of how exactly it differs from the derivation in the BVI paper. It would also be good to explore the effect of the parameter C that determines how different the components of the mixture are, as it seems crucial to the method.
The use of a Gaussian likelihood without dequantization on (at least partially) binary (MNIST and Omniglot) or quantized data (SVHN and CelebA) means that the results cannot be compared to those in the literature and the correctness of the baselines cannot be verified. Moreover, the quality of the posterior approximation will be considerably more important when using a Gaussian likelihood (without dequantization) than with a less sensitive likelihood such as Bernoulli. As a result, the relative performance of the proposed method might be considerably less impressive in a more careful experimental setup. The results on binary MNIST with a Bernoulli likelihood reported in the rebuttal are consistent with this theory. The authors are urged to rerun the experiments on all the datasets using more appropriate likelihoods and include the results in the final version of the paper.
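The dequantization point can be illustrated with a short sketch. Fitting a continuous density (such as a Gaussian likelihood) directly to integer pixel values is ill-posed, since the model can concentrate unbounded density on the discrete values; the standard remedy is uniform dequantization, which adds noise u ~ Uniform[0, 1) before rescaling. The array shapes and scale factor below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Quantized images: integer pixel values in {0, ..., 255}
# (batch of 4 hypothetical 28x28 images, for illustration).
x_quantized = rng.integers(0, 256, size=(4, 28, 28)).astype(np.float64)

# Uniform dequantization: add u ~ Uniform[0, 1) so the data has a proper
# continuous density, then rescale to [0, 1) for the model.
u = rng.random(x_quantized.shape)
x_dequantized = (x_quantized + u) / 256.0

# Without this step, a continuous likelihood can place arbitrarily high
# density on the discrete pixel values, so reported log-likelihoods are
# not comparable to results computed on dequantized data.
assert 0.0 <= x_dequantized.min() and x_dequantized.max() < 1.0
```

This is why, as noted above, Gaussian-likelihood results on quantized data without dequantization cannot be compared against numbers reported in the literature.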