NIPS Proceedings

Andrew G. Barto

17 Papers

  • Clustering via Dirichlet Process Mixture Models for Portable Skill Discovery (2011)
  • Constructing Skill Trees for Reinforcement Learning Agents from Demonstration Trajectories (2010)
  • Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining (2009)
  • Skill Characterization Based on Betweenness (2008)
  • Intrinsically Motivated Reinforcement Learning (2004)
  • The Emergence of Multiple Movement Units in the Presence of Noise and Feedback Delay (2001)
  • Automated State Abstraction for Options using the U-Tree Algorithm (2000)
  • Learning Instance-Independent Value Functions to Enhance Local Search (1998)
  • Automated Aircraft Recovery via Reinforcement Learning: Initial Experiments (1997)
  • Local Bandit Approximation for Optimal Learning Problems (1996)
  • Reinforcement Learning for Mixed Open-loop and Closed-loop Control (1996)
  • Text-Based Information Retrieval Using Exponentiated Gradient Descent (1996)
  • A Predictive Switching Model of Cerebellar Movement Control (1995)
  • Improving Elevator Performance Using Reinforcement Learning (1995)
  • An Actor/Critic Algorithm that is Equivalent to Q-Learning (1994)
  • Convergence of Indirect Adaptive Asynchronous Value Iteration Algorithms (1993)
  • Robust Reinforcement Learning in Motion Planning (1993)