
Learning convex bounds for linear quadratic control policy synthesis

Part of: Advances in Neural Information Processing Systems 31 (NIPS 2018) pre-proceedings



Conference Event Type: Poster


Learning to make decisions from observed data in dynamic environments remains a problem of fundamental importance in a number of fields, from artificial intelligence and robotics, to medicine and finance. This paper concerns the problem of learning control policies for unknown linear dynamical systems so as to maximize a quadratic reward function. We present a method to optimize the expected value of the reward over the posterior distribution of the unknown system parameters, given data. The algorithm involves sequential convex programming, and enjoys reliable local convergence and robust stability guarantees. Numerical simulations and stabilization of a real-world inverted pendulum are used to demonstrate the approach, with strong performance and robustness properties observed in both.
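The core idea in the abstract, optimizing the expected quadratic reward over a posterior distribution of unknown system parameters, can be illustrated with a minimal Monte Carlo sketch. This is not the paper's algorithm (which uses sequential convex programming); it is a hypothetical toy example for a scalar linear system, where candidate feedback gains are scored by the average finite-horizon quadratic cost over posterior samples of the unknown dynamics parameter:

```python
import numpy as np

# Toy scalar system x_{t+1} = a*x_t + b*u_t, with feedback policy u_t = -k*x_t.
# The parameter a is "unknown": we only have samples from a (hypothetical) posterior.
rng = np.random.default_rng(0)
a_samples = 0.9 + 0.05 * rng.standard_normal(50)  # posterior samples of a
b = 1.0                                           # assumed known input gain
q, r = 1.0, 0.1                                   # quadratic cost weights
T, x0 = 50, 1.0                                   # horizon and initial state

def expected_cost(k):
    """Average finite-horizon quadratic cost over the posterior samples."""
    costs = []
    for a in a_samples:
        x, c = x0, 0.0
        for _ in range(T):
            u = -k * x
            c += q * x**2 + r * u**2
            x = a * x + b * u
        costs.append(c)
    return np.mean(costs)

# Crude policy search: pick the gain minimizing the posterior-averaged cost.
gains = np.linspace(0.0, 1.5, 151)
cost_values = [expected_cost(k) for k in gains]
best_idx = int(np.argmin(cost_values))
best_k, best_cost = gains[best_idx], cost_values[best_idx]
print(f"best gain k = {best_k:.2f}, expected cost = {best_cost:.3f}")
```

The grid search stands in for the paper's convex-programming machinery; the point is only that the objective being minimized is an expectation of the quadratic cost under the parameter posterior, rather than the cost at a single point estimate.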