This work describes a simple approach to synthetically augmenting the training data for neural machine translation. The proposed approach trains multiple forward and backward MT models and appends their translations of the original training data to the training set. This augmented (or diversified) dataset can then be used to train the next generation of models. The approach is simple, achieves good results, and the authors present the idea well. The paper is quite empirical and the technique fairly specific to NMT, but it is still interesting to see that simple ideas sometimes work well; such ideas are important and deserve careful consideration.

A final request from the AC: it would be better to avoid the word 'elegant' in the title. Let the reader decide whether something is elegant, and stick to more objective adjectives instead.
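For concreteness, the diversification procedure as described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `diversify` and the toy stand-in "models" (plain callables) are hypothetical; a real system would train and decode with actual NMT models.

```python
def diversify(parallel_data, forward_models, backward_models):
    """Augment (src, tgt) pairs with synthetic pairs from multiple models.

    forward_models:  list of callables mapping src -> synthetic tgt
    backward_models: list of callables mapping tgt -> synthetic src
    Returns the original pairs plus one synthetic pair per model per example.
    """
    augmented = list(parallel_data)
    for src, tgt in parallel_data:
        # Forward models translate the real source; pair with that source.
        for fwd in forward_models:
            augmented.append((src, fwd(src)))
        # Backward models back-translate the real target; pair with that target.
        for bwd in backward_models:
            augmented.append((bwd(tgt), tgt))
    return augmented

# Toy usage with trivial stand-in "models" (hypothetical placeholders).
data = [("hallo welt", "hello world")]
forward = [lambda s: s.upper()]   # pretend forward translation model
backward = [lambda t: t[::-1]]    # pretend backward translation model
augmented = diversify(data, forward, backward)
# augmented holds the 1 original pair plus 2 synthetic pairs
```

The next-generation model would then be trained on `augmented` in place of the original corpus, and the process can be iterated.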