MAP Inference for Bayesian Inverse Reinforcement Learning

Part of Advances in Neural Information Processing Systems 24 (NIPS 2011)


Authors

Jaedeug Choi, Kee-eung Kim

Abstract

The difficulty in inverse reinforcement learning (IRL) arises in choosing the best reward function, since there are typically infinitely many reward functions under which the given behaviour data is optimal. Using a Bayesian framework, we address this challenge by performing maximum a posteriori (MAP) estimation of the reward function, and show that most previous IRL algorithms can be cast within our framework. We also present a gradient method for the MAP estimation based on the (sub)differentiability of the posterior distribution. We demonstrate the effectiveness of our approach by comparing the performance of the proposed method with that of previous algorithms.
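To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of MAP inference for Bayesian IRL via gradient ascent on the log-posterior. The small MDP, the Boltzmann (softmax) likelihood with temperature eta, the Gaussian prior, and the finite-difference gradient used in place of the paper's analytic (sub)gradient are all illustrative assumptions.

```python
# Sketch of MAP inference for Bayesian IRL: gradient ascent on
# log P(R | D) = log P(D | R) + log P(R) + const.
# All modelling choices below (MDP, likelihood, prior, finite-difference
# gradient) are assumptions for illustration, not the paper's exact method.
import numpy as np

def q_values(reward, P, gamma=0.95, iters=200):
    """Standard value iteration: Q*(s, a) for state rewards reward[s]."""
    n_states, n_actions = P.shape[0], P.shape[1]
    Q = np.zeros((n_states, n_actions))
    for _ in range(iters):
        V = Q.max(axis=1)
        Q = reward[:, None] + gamma * P @ V   # P has shape (S, A, S)
    return Q

def log_posterior(reward, P, demos, eta=5.0, sigma=1.0):
    """Boltzmann likelihood of demonstrated (s, a) pairs plus Gaussian prior."""
    Q = q_values(reward, P)
    z = eta * Q
    z -= z.max(axis=1, keepdims=True)          # stabilise log-sum-exp
    log_pi = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    log_lik = sum(log_pi[s, a] for s, a in demos)
    log_prior = -0.5 * np.sum(reward ** 2) / sigma ** 2
    return log_lik + log_prior

def map_reward(P, demos, n_steps=300, lr=0.05, eps=1e-4):
    """Gradient ascent on the log-posterior; a finite-difference gradient
    stands in for the analytic (sub)gradient derived in the paper."""
    reward = np.zeros(P.shape[0])
    for _ in range(n_steps):
        base = log_posterior(reward, P, demos)
        grad = np.zeros_like(reward)
        for i in range(len(reward)):
            bumped = reward.copy()
            bumped[i] += eps
            grad[i] = (log_posterior(bumped, P, demos) - base) / eps
        reward += lr * grad
    return reward

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, A = 5, 2
    P = rng.dirichlet(np.ones(S), size=(S, A))     # random transition model P[s, a, s']
    demos = [(0, 1), (1, 1), (2, 0), (3, 1)]       # hypothetical expert (state, action) pairs
    print(map_reward(P, demos))
```

The returned reward vector is a point estimate: the mode of the posterior rather than its mean, which is what distinguishes the MAP approach from posterior-mean Bayesian IRL.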