Universal Approximation and Learning of Trajectories Using Oscillators

Part of Advances in Neural Information Processing Systems 8 (NIPS 1995)


Authors

Pierre Baldi, Kurt Hornik

Abstract

Natural and artificial neural circuits must be capable of traversing specific state space trajectories. A natural approach to this problem is to learn the relevant trajectories from examples. Unfortunately, gradient descent learning of complex trajectories in amorphous networks is unsuccessful. We suggest a possible approach where trajectories are realized by combining simple oscillators, in various modular ways. We contrast two regimes of fast and slow oscillations. In all cases, we show that banks of oscillators with bounded frequencies have universal approximation properties. Open questions are also discussed briefly.

1 INTRODUCTION: TRAJECTORY LEARNING

The design of artificial neural systems, in robotics and other applications, often leads to the problem of constructing a recurrent neural network capable of producing a particular trajectory in the state space of its visible units. Throughout evolution, biological neural systems, such as central pattern generators, have been faced with similar challenges. A natural approach to this problem is to try to "learn" the desired trajectory, for instance through a process of trial and error and subsequent optimization. Unfortunately, gradient descent learning of complex trajectories in amorphous networks is unsuccessful. Here, we suggest a possible approach where trajectories are realized, in a modular and hierarchical fashion, by combining simple oscillators. In particular, we show that banks of oscillators have universal approximation properties.
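
As an informal numerical illustration of this universal approximation claim (a minimal sketch, not the construction developed in the paper), one can fit a target trajectory on [0, T] with a weighted bank of sinusoidal oscillators whose frequencies are bounded. The target function, the bank size K, and the least-squares fit below are all our own illustrative choices.

```python
import numpy as np

# Illustrative only: approximate a target trajectory f(t) on [0, T] with a
# weighted bank of sinusoidal oscillators of bounded frequency, fitting the
# output weights by ordinary least squares.
T = 1.0
t = np.linspace(0.0, T, 500)
f = np.exp(-3 * t) * np.sin(8 * np.pi * t)   # arbitrary target trajectory

K = 20                                        # number of oscillators in the bank
freqs = np.arange(1, K + 1) * np.pi / T       # frequencies bounded by K*pi/T
basis = np.column_stack(
    [np.ones_like(t)]
    + [np.sin(w * t) for w in freqs]
    + [np.cos(w * t) for w in freqs]
)

weights, *_ = np.linalg.lstsq(basis, f, rcond=None)
print("max abs error:", np.max(np.abs(basis @ weights - f)))
```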

To begin with, we can restrict ourselves to the simple case of a network with one visible linear unit, and consider the problem of adjusting the network parameters so that the output unit activity u(t) equals a target function f(t) over an interval of time [0, T]. The hidden units of the network may be non-linear and satisfy, for instance, one of the usual neural network charging equations, such as

\[
\frac{du_i}{dt} = -\frac{u_i}{\tau_i} + \sum_j w_{ij}\, f_j\bigl(u_j(t - \tau_{ij})\bigr).
\]
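
As a concrete illustration, here is a minimal forward-Euler simulation of a network of this type. For simplicity it assumes zero delays (tau_ij = 0), unit time constants, and f_j = tanh; these are our own simplifying choices, not anything fixed by the paper.

```python
import numpy as np

# Minimal forward-Euler integration of the charging equation above,
# assuming zero delays (tau_ij = 0) and f_j = tanh for simplicity.
rng = np.random.default_rng(0)
n, dt, steps = 5, 0.01, 1000
tau = np.ones(n)                                      # time constants tau_i
W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))   # weights w_ij
u = rng.normal(size=n)                                # initial activities u_i(0)

trajectory = np.empty((steps, n))
for k in range(steps):
    du = -u / tau + W @ np.tanh(u)   # du_i/dt = -u_i/tau_i + sum_j w_ij f_j(u_j)
    u = u + dt * du
    trajectory[k] = u                # record the state-space trajectory
```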