Generalisation in Feedforward Networks

Part of Advances in Neural Information Processing Systems 7 (NIPS 1994)


Authors

Adam Kowalczyk, Herman Ferrá

Abstract

We discuss a model of consistent learning with an additional restriction on the probability distribution of training samples, the target concept and hypothesis class. We show that the model provides a significant improvement on the upper bounds of sample complexity, i.e. the minimal number of random training samples allowing selection of a hypothesis with a predefined accuracy and confidence. Further, we show that the model has the potential to provide a finite sample complexity even in the case of infinite VC-dimension, as well as a sample complexity below the VC-dimension. This is achieved by linking sample complexity to an "average" number of implementable dichotomies of a training sample rather than to the maximal size of a shattered sample, i.e. the VC-dimension.
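To make the central notion concrete, the following minimal Python sketch (illustrative only, not the paper's construction) counts the dichotomies that the simple class of one-dimensional threshold classifiers can implement on a particular random sample; the function name `dichotomies_of_thresholds` and the choice of hypothesis class are assumptions made for illustration. Even for this toy class, the number of dichotomies implementable on a sample of size m is m + 1, far below the 2^m labelings that shattering the whole sample would require.

```python
import random

def dichotomies_of_thresholds(sample):
    """Count distinct labelings (dichotomies) of a 1-D sample that are
    implementable by threshold classifiers h_t(x) = 1[x >= t]."""
    labelings = set()
    xs = sorted(sample)
    # Thresholds below the smallest point, between consecutive points,
    # and above the largest point realize every achievable labeling.
    cut_points = ([xs[0] - 1.0]
                  + [(a + b) / 2.0 for a, b in zip(xs, xs[1:])]
                  + [xs[-1] + 1.0])
    for t in cut_points:
        labelings.add(tuple(int(x >= t) for x in sample))
    return len(labelings)

random.seed(0)
m = 8
sample = [random.random() for _ in range(m)]
print("sample size m =", m)
print("dichotomies implementable by thresholds:", dichotomies_of_thresholds(sample))
print("all possible dichotomies 2^m =", 2 ** m)
```

Counting dichotomies realized on the actual training sample, rather than appealing to the worst-case shattered set, is the kind of sample-dependent quantity the abstract refers to when it speaks of an "average" number of implementable dichotomies.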