NIPS Proceedings
Doina Precup
16 Papers
Basis Refinement Strategies for Linear Value Function Approximation in MDPs (2015)
Data Generation as Sequential Decision Making (2015)
Learning with Pseudo-Ensembles (2014)
Optimizing Energy Production Using Policy Search and Predictive State Representations (2014)
Bellman Error Based Feature Generation Using Random Projections on Sparse Spaces (2013)
Learning from Limited Demonstrations (2013)
On-line Reinforcement Learning Using Incremental Kernel-Based Stochastic Factorization (2012)
Value Pursuit Iteration (2012)
Reinforcement Learning Using Kernel-Based Stochastic Factorization (2011)
Convergent Temporal-Difference Learning with Arbitrary Smooth Function Approximation (2009)
Bounding Performance Loss in Approximate MDP Homomorphisms (2008)
Off-policy Learning with Options and Recognizers (2005)
A Convergent Form of Approximate Policy Iteration (2002)
Improved Switching among Temporally Abstract Actions (1998)
Learning to Schedule Straight-Line Code (1997)
Multi-time Models for Temporally Abstract Planning (1997)