LSTD with Random Projections

Part of Advances in Neural Information Processing Systems 23 (NIPS 2010)


Authors

Mohammad Ghavamzadeh, Alessandro Lazaric, Odalric Maillard, Rémi Munos

Abstract

We consider the problem of reinforcement learning in high-dimensional spaces when the number of features is larger than the number of samples. In particular, we study the least-squares temporal difference (LSTD) learning algorithm when a low-dimensional space is generated by a random projection from a high-dimensional space. We provide a thorough theoretical analysis of LSTD with random projections and derive performance bounds for the resulting algorithm. We also show how the error of LSTD with random projections is propagated through the iterations of a policy iteration algorithm and provide a performance bound for the resulting least-squares policy iteration (LSPI) algorithm.
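
To make the setting concrete, the following is a minimal illustrative sketch (not the authors' exact construction) of LSTD run on randomly projected features: high-dimensional features are mapped to a low-dimensional space by a Gaussian random matrix, and the standard LSTD system is then solved in that space. The function name, the ridge term, and the choice of Gaussian projection are assumptions made for illustration.

    import numpy as np

    def lstd_random_projection(phi, phi_next, rewards, gamma, d, ridge=1e-6, seed=0):
        """Sketch of LSTD on randomly projected features.

        phi      : (n, D) high-dimensional features of sampled states
        phi_next : (n, D) features of the corresponding next states
        rewards  : (n,)   observed rewards
        gamma    : discount factor
        d        : target (low) projection dimension, d << D
        """
        n, D = phi.shape
        rng = np.random.default_rng(seed)
        # Gaussian random projection; entries scaled so inner products are
        # approximately preserved (Johnson-Lindenstrauss style argument).
        proj = rng.normal(0.0, 1.0 / np.sqrt(d), size=(D, d))

        psi = phi @ proj          # (n, d) projected features
        psi_next = phi_next @ proj

        # Standard LSTD system in the low-dimensional space:
        #   A w = b,  A = sum_i psi_i (psi_i - gamma psi'_i)^T,  b = sum_i psi_i r_i
        A = psi.T @ (psi - gamma * psi_next)
        b = psi.T @ rewards
        # Small ridge term (an assumption here) for numerical stability.
        w = np.linalg.solve(A + ridge * np.eye(d), b)

        # The value estimate of a state s is then (phi(s) @ proj) @ w.
        return w, proj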