A Practice Strategy for Robot Learning Control

Part of Advances in Neural Information Processing Systems 5 (NIPS 1992)

Authors

Terence Sanger

Abstract

"Trajectory Extension Learning" is a new technique for Learning Control in Robots which assumes that there exists some parameter of the desired trajectory that can be smoothly varied from a region of easy solvability of the dynamics to a region of desired behavior which may have more difficult dynamics. By gradually varying the parameter, practice movements remain near the desired path while a Neural Network learns to approximate the inverse dynamics. For example, the average speed of motion might be varied, and the in(cid:173) verse dynamics can be "bootstrapped" from slow movements with simpler dynamics to fast movements. This provides an example of the more general concept of a "Practice Strategy" in which a se(cid:173) quence of intermediate tasks is used to simplify learning a complex task. I show an example of the application of this idea to a real 2-joint direct drive robot arm.