Direct value-approximation for factored MDPs

Part of Advances in Neural Information Processing Systems 14 (NIPS 2001)


Authors

Dale Schuurmans, Relu Patrascu

Abstract

We present a simple approach for computing reasonable policies for factored Markov decision processes (MDPs), when the optimal value function can be approximated by a compact linear form. Our method is based on solving a single linear program that approximates the best linear fit to the optimal value function. By applying an efficient constraint generation procedure we obtain an iterative solution method that tackles concise linear programs. This direct linear programming approach experimentally yields a significant reduction in computation time over approximate value- and policy-iteration methods (sometimes reducing several hours to a few seconds). However, the quality of the solutions produced by linear programming is weaker: usually about twice the approximation error for the same approximating class. Nevertheless, the speed advantage allows one to use larger approximation classes to achieve similar error in reasonable time.
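To illustrate the flavor of the approach, below is a minimal sketch of linear-programming value approximation with constraint generation. It is not the paper's exact formulation: it uses the generic approximate-LP objective (minimize a weighted sum of approximate values subject to Bellman constraints), and for clarity it enumerates states to find the most violated constraint, whereas the paper exploits factored structure to search for violated constraints without enumeration. All function and variable names here are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def solve_alp(P, R, Phi, gamma=0.95, tol=1e-6, w_bound=1e3, max_iter=500):
    """Approximate-LP value fitting with constraint generation (sketch).

    P   : (S, A, S) transition probabilities P[s, a, s']
    R   : (S, A)    rewards
    Phi : (S, k)    basis functions, row s is phi(s)^T
    Returns weights w such that V(s) is approximated by phi(s)^T w.
    """
    S, A = R.shape
    k = Phi.shape[1]
    c = Phi.sum(axis=0)                  # uniform state-relevance weights
    A_ub, b_ub = [], []                  # Bellman constraints generated so far
    w = np.zeros(k)
    bounds = [(-w_bound, w_bound)] * k   # box keeps intermediate LPs bounded

    for _ in range(max_iter):
        v = Phi @ w                                   # current value estimates
        q = R + gamma * np.einsum('sat,t->sa', P, v)  # one-step backed-up values
        viol = q - v[:, None]                         # constraint violations
        s, a = np.unravel_index(np.argmax(viol), viol.shape)
        if viol[s, a] <= tol:
            break                                     # all constraints satisfied
        # Add the most violated constraint, written in linprog's A_ub w <= b_ub
        # form: (gamma * P[s,a] @ Phi - Phi[s]) @ w <= -R[s,a].
        A_ub.append(gamma * P[s, a] @ Phi - Phi[s])
        b_ub.append(-R[s, a])
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
        w = res.x
    return w

# Tiny 2-state, 2-action example; with a tabular basis (Phi = I) the LP
# recovers the exact optimal value function.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0], [0.0, 2.0]])
w = solve_alp(P, R, np.eye(2))
```

Each iteration solves only the concise LP over the constraints generated so far, which is the source of the speed advantage the abstract describes; the paper's contribution is making the "find the most violated constraint" step efficient when the MDP is factored.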