Relaxation Networks for Large Supervised Learning Problems

Part of Advances in Neural Information Processing Systems 3 (NIPS 1990)

Joshua Alspector, Robert Allen, Anthony Jayakumar, Torsten Zeppenfeld, Ronny Meir


Feedback connections are required so that the teacher signal on the output neurons can modify the weights during supervised learning. Relaxation methods are needed for learning static patterns with full-time feedback connections. Feedback-network learning techniques have not achieved wide popularity because back-propagation remains more computationally efficient. We show by simulation that relaxation networks of the kind we are implementing in VLSI are capable of learning large problems, just as back-propagation networks are. A microchip incorporates deterministic mean-field theory learning as well as stochastic Boltzmann learning. A multiple-chip electronic system implementing these networks will make high-speed parallel learning in them feasible.
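As a rough illustration of the deterministic variant named in the abstract, the sketch below relaxes a symmetric feedback network to a mean-field fixed point and applies the contrastive (clamped-minus-free) weight update that mean-field theory learning shares with stochastic Boltzmann learning; the stochastic variant would replace the deterministic relaxation with annealed Gibbs sampling of binary units. This is a minimal sketch, not the paper's implementation: all function names, the synchronous update schedule, and the parameter values are hypothetical, and none of the chip's circuit details, network sizes, or annealing schedules are reflected here.

```python
import numpy as np

def mean_field_relax(W, b, clamped=None, beta=1.0, n_iters=50):
    """Relax a symmetric network (zero-diagonal W, biases b) to a
    mean-field fixed point; `clamped` maps unit index -> held value."""
    m = np.zeros(len(b))
    clamped = clamped or {}
    for i, v in clamped.items():
        m[i] = v
    for _ in range(n_iters):
        # Synchronous mean-field update: m_i = tanh(beta * (W m + b)_i)
        m = np.tanh(beta * (W @ m + b))
        for i, v in clamped.items():
            m[i] = v  # re-impose clamped units each sweep
    return m

def contrastive_update(W, m_plus, m_minus, lr=0.05):
    """Boltzmann-style rule with mean-field correlations:
    dW_ij = lr * (m_i m_j |clamped - m_i m_j |free)."""
    dW = lr * (np.outer(m_plus, m_plus) - np.outer(m_minus, m_minus))
    np.fill_diagonal(dW, 0.0)  # no self-connections
    return W + dW

# Hypothetical usage: 6 units, units 0-1 as inputs, unit 5 as output.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(6, 6))
W = (W + W.T) / 2              # symmetric weights, as relaxation requires
np.fill_diagonal(W, 0.0)
b = np.zeros(6)

# Clamped (teacher) phase: inputs and target output held fixed.
m_plus = mean_field_relax(W, b, clamped={0: 1.0, 1: -1.0, 5: 1.0})
# Free phase: only the inputs are clamped.
m_minus = mean_field_relax(W, b, clamped={0: 1.0, 1: -1.0})
W = contrastive_update(W, m_plus, m_minus)
```

The feedback connections enter through the symmetric weight matrix: the teacher signal clamped on the output unit propagates back through W during relaxation, which is what lets a purely local correlational update train the hidden units.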