A Note on the Representational Incompatibility of Function Approximation and Factored Dynamics

Part of Advances in Neural Information Processing Systems 15 (NIPS 2002)

Eric Allender, Sanjeev Arora, Michael Kearns, Cristopher Moore, Alexander Russell


We establish a new hardness result that shows that the difficulty of planning in factored Markov decision processes is representational rather than just computational. More precisely, we give a fixed family of factored MDPs with linear rewards whose optimal policies and value functions simply cannot be represented succinctly in any standard parametric form. Previous hardness results indicated that computing good policies from the MDP parameters was difficult, but left open the possibility of succinct function approximation for any fixed factored MDP. Our result applies even to policies which yield a polynomially poor approximation to the optimal value, and highlights interesting connections with the complexity class of Arthur-Merlin games.