NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 2886
Title: Learning to Predict Without Looking Ahead: World Models Without Forward Prediction
Interesting work that explores whether a world model can be learned without a forward-predictive loss, providing a novel perspective on model-based reinforcement learning. By introducing a method of 'observational dropout', the paper takes a first step towards demonstrating the feasibility of learning only the salient features needed for a task. The rebuttal includes baseline comparisons to model-based RL, which will be a valuable addition to the paper.