A Study of Parallel Perturbative Gradient Descent

Part of Advances in Neural Information Processing Systems 7 (NIPS 1994)



D. Lippe, Joshua Alspector


We have continued our study of a parallel perturbative learning method [Alspector et al., 1993] and implications for its implementation in analog VLSI. Our new results indicate that, in most cases, a single parallel perturbation (per pattern presentation) of the function parameters (weights in a neural network) is theoretically the best course. This is not true, however, for certain problems and may not generally be true when faced with issues of implementation such as limited precision. In these cases, multiple parallel perturbations may be best as indicated in our previous results.
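The core idea of a single parallel perturbation per pattern presentation can be sketched as follows. All weights are perturbed simultaneously with random signs, the resulting change in error is measured with one extra forward evaluation, and that change, projected back onto the sign vector, serves as a stochastic gradient estimate. This is a minimal illustrative sketch, not the authors' implementation; the function names, step sizes, and the quadratic test loss are assumptions for demonstration.

```python
import random

def parallel_perturb_step(w, loss, beta=1e-3, lr=0.1, rng=random):
    # One parallel perturbation: every weight is nudged by +/-beta at once.
    s = [rng.choice((-1.0, 1.0)) for _ in w]
    # Single extra evaluation of the error under the perturbed weights.
    dE = loss([wi + beta * si for wi, si in zip(w, s)]) - loss(w)
    # The error change, projected onto the sign vector, estimates the
    # gradient (its expectation over s equals the true gradient).
    return [wi - lr * (dE / beta) * si for wi, si in zip(w, s)]

if __name__ == "__main__":
    rng = random.Random(0)
    # Hypothetical quadratic loss standing in for a network's error.
    loss = lambda w: sum(x * x for x in w)
    w = [1.0, -2.0, 0.5]
    for _ in range(500):
        w = parallel_perturb_step(w, loss, rng=rng)
    print(loss(w))  # should be close to 0 after descent
```

Because the whole weight vector is perturbed in parallel, only two error evaluations are needed per update regardless of the number of weights, which is what makes the method attractive for analog VLSI, where per-weight sequential perturbation would be slow.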