NIPS Proceedings
Bruno Scherrer
4 Papers
Approximate Dynamic Programming Finally Performs Well in the Game of Tetris (2013)
Improved and Generalized Upper Bounds on the Complexity of Policy Iteration (2013)
On the Use of Non-Stationary Policies for Stationary Infinite-Horizon Markov Decision Processes (2012)
Biasing Approximate Dynamic Programming with a Lower Discount Factor (2008)