Radial Basis Function Networks and Complexity Regularization in Function Learning

Part of Advances in Neural Information Processing Systems 9 (NIPS 1996)


Authors

Adam Krzyzak, Tamás Linder

Abstract

In this paper we apply the method of complexity regularization to derive estimation bounds for nonlinear function estimation using a single hidden layer radial basis function network. Our approach differs from previous complexity regularization neural network function learning schemes in that we operate with random covering numbers and ℓ1 metric entropy, making it possible to consider much broader families of activation functions, namely functions of bounded variation. Some constraints previously imposed on the network parameters are also eliminated this way. The network is trained by means of complexity regularization involving empirical risk minimization. Bounds on the expected risk in terms of the sample size are obtained for a large class of loss functions. Rates of convergence to the optimal loss are also derived.
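To make the training scheme concrete, the following is a minimal Python sketch of complexity-regularized empirical risk minimization for a single-hidden-layer RBF network. The Gaussian kernel, random center placement, squared loss, and the penalty form c * sqrt(k log n / n) with its constant c are illustrative assumptions for exposition, not the paper's exact construction or penalty term.

```python
import numpy as np

def rbf_features(X, centers, width):
    # Gaussian radial basis activations at each center; the paper admits
    # broader activation families (bounded variation), Gaussian is illustrative.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, y, k, width=1.0, seed=0):
    # Place k centers at randomly chosen data points and fit the output
    # weights by least squares (empirical risk minimization, squared loss).
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    Phi = rbf_features(X, centers, width)
    weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, weights

def complexity_regularized_erm(X, y, max_k=20, c=1.0, width=1.0):
    # Select the network size k minimizing empirical risk plus a
    # complexity penalty; sqrt(k log n / n) is a stand-in penalty, not
    # the paper's exact term.
    n = len(X)
    best = None
    for k in range(1, max_k + 1):
        centers, weights = fit_rbf(X, y, k, width=width)
        resid = rbf_features(X, centers, width) @ weights - y
        score = np.mean(resid ** 2) + c * np.sqrt(k * np.log(n) / n)
        if best is None or score < best[0]:
            best = (score, k, centers, weights)
    return best

# Usage: recover a smooth target from noisy samples.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
score, k, centers, weights = complexity_regularized_erm(X, y)
print(f"selected k={k}, penalized score={score:.4f}")
```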