Boosting the Performance of RBF Networks with Dynamic Decay Adjustment

Part of Advances in Neural Information Processing Systems 7 (NIPS 1994)


Authors

Michael Berthold, Jay Diamond

Abstract

Radial Basis Function (RBF) networks, also known as networks of locally-tuned processing units (see [6]), are well known for their ease of use. Most algorithms used to train these types of networks, however, require a fixed architecture, in which the number of units in the hidden layer must be determined before training starts. The RCE training algorithm, introduced by Reilly, Cooper and Elbaum (see [8]), and its probabilistic extension, the P-RCE algorithm, take advantage of a growing structure in which hidden units are only introduced when necessary. The nature of these algorithms allows training to reach stability much faster than is the case for gradient-descent based methods. Unfortunately, P-RCE networks do not adjust the standard deviation of their prototypes individually, using only one global value for this parameter. This paper introduces the Dynamic Decay Adjustment (DDA) algorithm, which utilizes the constructive nature of the P-RCE algorithm together with independent adaptation of each prototype's decay factor. In addition, this radial adjustment is class dependent and distinguishes between different neighbours. It is shown that networks trained with the presented algorithm perform substantially better than common RBF networks.
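The abstract only outlines the method, so the following is a minimal Python sketch of a DDA-style training epoch under stated assumptions: Gaussian prototype activations and a two-threshold cover/commit/shrink scheme. The names THETA_PLUS, THETA_MINUS, and the helper functions are illustrative, not the paper's notation, and this simplified version omits details such as shrinking a newly committed prototype against already-existing conflicting prototypes.

```python
import numpy as np

THETA_PLUS = 0.4    # a same-class prototype must fire at least this strongly
THETA_MINUS = 0.2   # conflicting-class prototypes must fire below this level

def activation(x, center, sigma):
    """Gaussian radial basis activation of one prototype."""
    d2 = np.sum((x - center) ** 2)
    return np.exp(-d2 / sigma ** 2)

def dda_epoch(samples, labels, prototypes):
    """One DDA-style epoch over the training data.

    `prototypes` is a list of dicts with keys
    'center', 'sigma', 'weight', 'label'.
    """
    for p in prototypes:
        p['weight'] = 0.0               # weights are rebuilt each epoch
    for x, y in zip(samples, labels):
        # 1. Cover: if a prototype of the correct class already fires
        #    above THETA_PLUS, just increment its weight ...
        covered = False
        for p in prototypes:
            if p['label'] == y and activation(x, p['center'], p['sigma']) >= THETA_PLUS:
                p['weight'] += 1.0
                covered = True
                break
        # 2. Commit: ... otherwise introduce a new prototype centered at x
        #    (its radius starts unbounded and is shrunk by later conflicts).
        if not covered:
            prototypes.append({'center': np.array(x, dtype=float),
                               'sigma': np.inf,
                               'weight': 1.0,
                               'label': y})
        # 3. Shrink: reduce each conflicting prototype's radius so that its
        #    activation at x drops below THETA_MINUS -- this is the
        #    class-dependent, per-prototype decay adjustment.
        for p in prototypes:
            if p['label'] != y:
                d = np.sqrt(np.sum((x - p['center']) ** 2))
                if d > 0:
                    # exp(-d^2/sigma^2) < THETA_MINUS  <=>
                    # sigma < d / sqrt(-ln THETA_MINUS)
                    p['sigma'] = min(p['sigma'], d / np.sqrt(-np.log(THETA_MINUS)))
    return prototypes
```

Because prototypes are only committed when no existing same-class unit covers a sample, the hidden layer grows exactly as needed, while step 3 gives each prototype its own, class-sensitive standard deviation rather than one global value.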
