Learning Attractor Landscapes for Learning Motor Primitives

Part of Advances in Neural Information Processing Systems 15 (NIPS 2002)


Authors

Auke Ijspeert, Jun Nakanishi, Stefan Schaal

Abstract

Many control problems take place in continuous state-action spaces, e.g., as in manipulator robotics, where the control objective is often defined as finding a desired trajectory that reaches a particular goal state. While reinforcement learning offers a theoretical framework to learn such control policies from scratch, its applicability to higher-dimensional continuous state-action spaces remains rather limited to date. Instead of learning from scratch, in this paper we suggest learning a desired complex control policy by transforming an existing simple canonical control policy. For this purpose, we represent canonical policies in terms of differential equations with well-defined attractor properties. By nonlinearly transforming the canonical attractor dynamics using techniques from nonparametric regression, almost arbitrary new nonlinear policies can be generated without losing the stability properties of the canonical system. We demonstrate our techniques in the context of learning a set of movement skills for a humanoid robot from demonstrations of a human teacher. Policies are acquired rapidly, and, due to the properties of well-formulated differential equations, can be re-used and modified on-line under dynamic changes of the environment. The linear parameterization of nonparametric regression moreover lends itself to recognizing and classifying previously learned movement skills. Evaluations in simulations and on an actual 30-degree-of-freedom humanoid robot exemplify the feasibility and robustness of our approach.
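
For concreteness, the construction the abstract describes can be sketched in a few lines of code. The sketch below is a minimal, illustrative discrete movement primitive: a linear spring-damper attractor toward a goal g, modulated by a nonlinear forcing term f(s) that is linear in its weights and fitted to a demonstrated trajectory. All constants, the sinusoidal demonstration, and the use of plain least squares in place of the paper's locally weighted regression are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

# Illustrative sketch of a discrete movement primitive (constants and notation
# are assumptions, not the paper's exact formulation).
# Canonical (phase) system:   tau * s_dot = -alpha_s * s        (decays 1 -> 0)
# Transformed system:         tau * v_dot = K*(g - x) - D*v + (g - x0) * f(s)
#                             tau * x_dot = v
# f(s) is a normalized mixture of Gaussian basis functions, linear in w.

def forcing(s, w, centers, widths):
    """Learned forcing term; linear in the weights w, scaled by the phase s."""
    psi = np.exp(-widths * (s - centers) ** 2)
    return s * psi.dot(w) / (psi.sum() + 1e-10)

tau, alpha_s, K, D = 1.0, 4.0, 100.0, 20.0   # tau = 1 keeps derivatives simple
dt, T = 0.001, 1.0
t = np.arange(0.0, T, dt)

# Hypothetical demonstration and its derivatives.
x_demo = np.sin(np.pi * t / (2 * T))
v_demo = np.gradient(x_demo, dt)
a_demo = np.gradient(v_demo, dt)
x0, g = x_demo[0], x_demo[-1]

# Target forcing term obtained by solving the transformed system for f.
s = np.exp(-alpha_s * t / tau)
f_target = (tau * a_demo - K * (g - x_demo) + D * v_demo) / (g - x0)

# Gaussian bases spaced along the phase; widths set from center spacing.
n_basis = 20
centers = np.exp(-alpha_s * np.linspace(0, T, n_basis) / tau)
widths = np.empty(n_basis)
widths[:-1] = 1.0 / np.diff(centers) ** 2
widths[-1] = widths[-2]

# Fit weights: ordinary least squares stands in for locally weighted regression.
Psi = np.exp(-widths * (s[:, None] - centers[None, :]) ** 2)
Phi = s[:, None] * Psi / (Psi.sum(axis=1, keepdims=True) + 1e-10)
w, *_ = np.linalg.lstsq(Phi, f_target, rcond=None)

# Reproduce the movement by integrating the attractor system (Euler).
x, v, s_cur = x0, 0.0, 1.0
for _ in t:
    f = forcing(s_cur, w, centers, widths)
    v += dt * (K * (g - x) - D * v + (g - x0) * f) / tau
    x += dt * v / tau
    s_cur += dt * (-alpha_s * s_cur) / tau

print("final x:", x, " goal g:", g)  # x approaches g as the phase decays
```

Because the forcing term is scaled by the decaying phase s, it vanishes over time, so the spring-damper dynamics toward g dominate asymptotically regardless of what shape was learned; this is the sense in which the canonical system's stability properties survive the nonlinear transformation. The linearity of f(s) in the weights w is also what makes the fitted parameters usable for recognizing and classifying movements, as the abstract notes.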