NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 8832
Title: Robust exploration in linear quadratic reinforcement learning


The paper presents a new technique for robust optimization and balanced exploration in LQR problems. The approach is innovative in that it leverages semidefinite programming rather than dynamic programming, and it constitutes an important algorithmic contribution with solid theoretical support. For the empirical evaluation, the authors are expected to include the new experiments and running times mentioned in the rebuttal. Overall, this is very nice work.