This paper proposes a novelty-search exploration method based on an encoding of the environment. The method computes the novelty of a state in a learned embedding space and encourages the agent to optimize for this novelty using a combined model-free and model-based approach. Motivated by the information bottleneck principle, the embedding space is learned by maximizing compression while retaining an accurate dynamics model, compressing the environment into a compact state space well-suited for novelty-based exploration. The experiments are clear and well-motivated: grid-type domains to evaluate state coverage, and two control domains to evaluate how novelty search improves the agent's ability to perform control tasks. I particularly enjoyed the visualization of the learned abstraction of the labyrinth environment in Figure 1. The reviewers gave the authors good feedback on references, as well as suggestions for improving the experiments and measurements, which were partially addressed in the rebuttal. All in all, everyone agrees that this work should be accepted at NeurIPS and will be a fine contribution. The topic is relevant to a wide range of the NeurIPS audience and lends itself well to a visual presentation.
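For readers unfamiliar with novelty-based exploration in a learned embedding space, the sketch below illustrates the general idea only; it is not the authors' method. It uses a generic k-nearest-neighbour novelty bonus, and the encoder, buffer capacity, and k are hypothetical placeholders (in the paper, the representation is instead learned via the information-bottleneck objective).

```python
# Minimal illustrative sketch of a novelty bonus in an embedding space
# (generic k-NN novelty, NOT the paper's exact formulation).
import numpy as np

class NoveltyBonus:
    def __init__(self, encoder, k=10, capacity=10000):
        self.encoder = encoder    # maps a raw observation to an embedding vector
        self.k = k                # number of nearest neighbours used for the bonus
        self.capacity = capacity  # maximum number of stored embeddings
        self.memory = []          # embeddings of previously visited states

    def __call__(self, observation):
        z = np.asarray(self.encoder(observation), dtype=np.float64)
        if self.memory:
            # Distance from the current embedding to every stored embedding.
            dists = np.linalg.norm(np.stack(self.memory) - z, axis=1)
            k = min(self.k, len(dists))
            # Novelty = mean distance to the k nearest past embeddings.
            bonus = float(np.sort(dists)[:k].mean())
        else:
            bonus = 0.0           # no reference points yet
        if len(self.memory) >= self.capacity:
            self.memory.pop(0)    # discard the oldest embedding
        self.memory.append(z)
        return bonus              # added to the task reward as an exploration signal

# Example usage with a placeholder identity encoder:
# novelty = NoveltyBonus(encoder=lambda obs: obs)
# intrinsic_reward = novelty(np.array([0.3, -1.2]))
```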