On the Non-Existence of a Universal Learning Algorithm for Recurrent Neural Networks

Part of Advances in Neural Information Processing Systems 6 (NIPS 1993)

Authors

Herbert Wiklicky

Abstract

We prove that the so-called "loading problem" for (recurrent) neural networks is unsolvable. This extends several results which already demonstrated that training and related design problems for neural networks are (at least) NP-complete. Our result also implies that it is impossible to find or to formulate a universal training algorithm which could determine a correct set of weights for any neural network architecture. For the simple proof of this, we will just show that the loading problem is equivalent to "Hilbert's tenth problem", which is known to be unsolvable.

1 THE NEURAL NETWORK MODEL

It seems that there are relatively few commonly accepted general formal definitions of the notion of a "neural network". Although our results also hold if based on other formal definitions, we will try to stay here very close to the original setting in which Judd's NP-completeness result was given [Judd, 1990]. But in contrast to [Judd, 1990], we will deal here with simple recurrent networks instead of feedforward architectures.

Our networks are constructed from three different types of units: Σ-units compute just the sum of all incoming signals; for Π-units the activation (node) function is given by the product of the incoming signals; and for Θ-units the output is zero or one, depending on whether the input signal is smaller or larger than a certain threshold parameter θ. Our units are connected or linked by real-weighted connections and operate synchronously.
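To make this model concrete, the following is a minimal sketch (ours, not from the paper) of a synchronous recurrent network built from these three unit types. The function names, the three-unit topology, and all weight and threshold values are hypothetical illustrations chosen only to show the synchronous update discipline.

```python
import numpy as np

def sigma_unit(inputs):          # Σ-unit: sum of all incoming signals
    return sum(inputs)

def pi_unit(inputs):             # Π-unit: product of the incoming signals
    return float(np.prod(inputs))

def theta_unit(inputs, theta):   # Θ-unit: output 0 or 1, thresholded at θ
    return 1.0 if sum(inputs) > theta else 0.0

# Hypothetical 3-unit recurrent net: weights[j][i] is the real weight on
# the connection from unit i to unit j (0.0 means no connection).
weights = np.array([[0.0, 0.5, -1.0],
                    [1.0, 0.0,  0.0],
                    [0.0, 2.0,  0.0]])
kinds = ["sigma", "pi", "theta"]   # node function of each unit
theta = 0.5                        # threshold parameter of the Θ-unit

def step(state):
    """One synchronous step: every unit updates simultaneously from the
    previous state, as assumed in the model above."""
    new_state = np.empty_like(state)
    for j, kind in enumerate(kinds):
        incoming = [weights[j, i] * state[i]
                    for i in range(len(state)) if weights[j, i] != 0.0]
        if kind == "sigma":
            new_state[j] = sigma_unit(incoming)
        elif kind == "pi":
            new_state[j] = pi_unit(incoming)
        else:
            new_state[j] = theta_unit(incoming, theta)
    return new_state

state = np.array([1.0, 1.0, 0.0])
for t in range(5):
    state = step(state)
    print(t, state)
```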

Note that we could also base our construction on just one general type of unit, namely what is usually called ΣΠ-units. Furthermore, one could replace the Π-units in the below