Particle Filter-based Policy Gradient in POMDPs

Part of: Advances in Neural Information Processing Systems 21 (NIPS 2008)

Our setting is a Partially Observable Markov Decision Process with continuous state, observation, and action spaces. Decisions are based on a Particle Filter that estimates the belief state from past observations. We consider a policy gradient approach for optimizing a parameterized policy. To this end, we investigate sensitivity analysis of the performance measure with respect to the policy parameters, focusing on Finite Difference (FD) techniques. We show that the naive FD estimator is subject to variance explosion because of the non-smoothness of the resampling procedure. We propose a more sophisticated FD method that overcomes this problem and establish its consistency.
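To make the failure mode concrete, here is a minimal sketch (not the paper's method or experimental setup) of a naive two-sided FD gradient estimate through a particle-filter rollout. It uses a toy 1-D linear-Gaussian POMDP and a hypothetical linear policy acting on the posterior mean; all model choices, noise levels, and function names are illustrative assumptions. The point it illustrates is that multinomial resampling selects discrete particle indices, so an arbitrarily small change in the policy parameter can change which particles survive, and the FD estimator's variance grows as the perturbation shrinks.

```python
import numpy as np

def rollout_return(theta, rng, horizon=50, n_particles=100):
    """One episode of a toy 1-D POMDP with particle-filter belief tracking.

    Illustrative model (not from the paper):
        s' = s + a + process noise,   o = s + observation noise,
        reward = -s^2 - 0.1 a^2,      policy: a = -theta * belief_mean.
    """
    s = rng.normal()                              # true (hidden) state
    particles = rng.normal(size=n_particles)      # belief particles
    weights = np.full(n_particles, 1.0 / n_particles)
    total_reward = 0.0
    for _ in range(horizon):
        belief_mean = np.dot(weights, particles)  # policy sees only the belief
        a = -theta * belief_mean                  # hypothetical linear policy
        total_reward += -(s ** 2) - 0.1 * a ** 2
        # Environment transition and new observation.
        s = s + a + 0.1 * rng.normal()
        o = s + 0.2 * rng.normal()
        # Particle filter: propagate, reweight, resample.
        particles = particles + a + 0.1 * rng.normal(size=n_particles)
        weights *= np.exp(-0.5 * ((o - particles) / 0.2) ** 2)
        weights /= weights.sum()
        idx = rng.choice(n_particles, size=n_particles, p=weights)  # non-smooth in theta
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)
    return total_reward

def naive_fd_gradient(theta, eps=1e-3, n_rollouts=200, seed=0):
    """Two-sided finite difference with common random numbers.

    Even with shared random seeds, the resampled indices can differ between
    the theta+eps and theta-eps rollouts; the resulting O(1) difference in
    return, divided by 2*eps, makes the estimator's variance blow up.
    """
    grads = []
    for i in range(n_rollouts):
        r_plus = rollout_return(theta + eps, np.random.default_rng(seed + i))
        r_minus = rollout_return(theta - eps, np.random.default_rng(seed + i))
        grads.append((r_plus - r_minus) / (2 * eps))
    return np.mean(grads), np.std(grads)

if __name__ == "__main__":
    for eps in (1e-1, 1e-2, 1e-3):
        g, sd = naive_fd_gradient(theta=0.5, eps=eps)
        print(f"eps={eps:g}: FD gradient estimate {g:+.3f} (std {sd:.1f})")
```

Running the sketch with decreasing `eps` typically shows the standard deviation of the naive estimator increasing, which is the variance explosion the paper's FD method is designed to avoid.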