Analog VLSI Implementation of Multi-dimensional Gradient Descent

Part of Advances in Neural Information Processing Systems 5 (NIPS 1992)


Authors

David B. Kirk, Douglas Kerns, Kurt Fleischer, Alan Barr

Abstract

We describe an analog VLSI implementation of a multi-dimensional gradient estimation and descent technique for minimizing an on-chip scalar function f(). The implementation uses noise injection and multiplicative correlation to estimate derivatives, as in [Anderson, Kerns 92]. One intended application of this technique is setting circuit parameters on-chip automatically, rather than manually [Kirk 91]. Gradient descent optimization may be used to adjust synapse weights for a backpropagation or other on-chip learning implementation. The approach combines the features of continuous multi-dimensional gradient descent and the potential for an annealing style of optimization. We present data measured from our analog VLSI implementation.
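The core idea of noise-injection gradient estimation can be illustrated in software: small zero-mean noise is added to the parameters, and the resulting fluctuation in f is correlated (multiplied and averaged) with the injected noise to recover each gradient component. The sketch below is a discrete-time numerical analogue of the technique, not the authors' analog circuit; the function and parameter names are hypothetical, and the analog implementation would instead use continuous noise sources, multipliers, and lowpass filtering.

```python
import numpy as np

def correlation_gradient_estimate(f, x, sigma=1e-3, n_samples=1000, rng=None):
    """Estimate grad f(x) via noise injection and multiplicative correlation:
    E[n * (f(x + n) - f(x))] ~= sigma^2 * grad f(x)
    for small zero-mean Gaussian noise n with per-component variance sigma^2."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    f0 = f(x)
    for _ in range(n_samples):
        n = rng.normal(0.0, sigma, size=x.shape)   # injected noise
        grad += n * (f(x + n) - f0)                # correlate noise with output change
    return grad / (n_samples * sigma**2)

def gradient_descent(f, x0, lr=0.1, steps=200, **est_kwargs):
    """Descend f using the correlation-based gradient estimate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x -= lr * correlation_gradient_estimate(f, x, **est_kwargs)
    return x

if __name__ == "__main__":
    # Example: minimize a simple quadratic bowl with minimum at (1, -2).
    f = lambda x: float(np.sum((x - np.array([1.0, -2.0]))**2))
    print(gradient_descent(f, x0=[0.0, 0.0]))
```

An annealing-style schedule, as mentioned in the abstract, could be approximated in this sketch by starting with a larger noise amplitude sigma and reducing it over the course of the descent.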