Part of Advances in Neural Information Processing Systems 1 (NIPS 1988)
Yann Le Cun, Conrad Galland, Geoffrey E. Hinton
Learning procedures that measure how random perturbations of unit activities correlate with changes in reinforcement are inefficient but simple to implement in hardware. Procedures like back-propagation (Rumelhart, Hinton and Williams, 1986), which compute how changes in activities affect the output error, are much more efficient but require more complex hardware. GEMINI is a hybrid procedure for multilayer networks that shares many of the implementation advantages of correlational reinforcement procedures but is more efficient. GEMINI injects noise only at the first hidden layer and measures the resultant effect on the output error. A linear network associated with each hidden layer iteratively inverts the matrix that relates the noise to the error change, thereby obtaining the error derivatives. No back-propagation is involved, so the system may contain unknown non-linearities. Two simulations demonstrate the effectiveness of GEMINI.
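The core idea of the abstract can be sketched numerically. The following is a minimal illustration of my own, not the authors' code: noise is injected at a hidden-activity vector, the resulting change in output error is observed, and a linear predictor is updated iteratively (here with a normalized-LMS rule, standing in for the abstract's "linear network" that inverts the noise-to-error relation) until it recovers the error derivatives. The names (`output_error`, `W_out`, the noise scale `sigma`) and the toy network are assumptions for the sketch; note that no derivative of the forward pass is ever computed analytically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy set-up: a fixed hidden-activity vector h0 feeds an output stage that
# GEMINI treats as a black box (it may contain unknown non-linearities,
# since nothing is ever differentiated analytically).
W_out = rng.normal(size=(1, 5))
target = 0.3

def output_error(h):
    y = np.tanh(W_out @ h)[0]       # black-box output stage
    return 0.5 * (y - target) ** 2

h0 = rng.normal(size=5)             # activities of the first hidden layer
E0 = output_error(h0)

# Iteratively estimate g = dE/dh: inject small noise at the hidden layer,
# observe the resulting error change, and update a linear predictor of that
# change with a normalized-LMS step. At convergence, g . noise matches the
# observed dE, so g approximates the error derivatives.
g = np.zeros(5)
lr, sigma = 0.5, 1e-3
for _ in range(2000):
    noise = rng.normal(scale=sigma, size=5)
    dE = output_error(h0 + noise) - E0
    g += lr * (dE - g @ noise) * noise / (noise @ noise)

# Sanity check against a finite-difference reference gradient.
ref = np.array([(output_error(h0 + 1e-6 * e) - E0) / 1e-6
                for e in np.eye(5)])
print(np.allclose(g, ref, atol=1e-2))
```

Because the noise is small, the error change is nearly linear in the noise, so the iterative linear fit converges to the true derivatives without any backward pass through the network.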