The Generalisation Cost of RAMnets

Part of Advances in Neural Information Processing Systems 9 (NIPS 1996)


Authors

Richard Rohwer, Michal Morciniec

Abstract

Given unlimited computational resources, it is best to use a criterion of minimal expected generalisation error to select a model and determine its parameters. However, it may be worthwhile to sacrifice some generalisation performance for higher learning speed. A method for quantifying sub-optimality is set out here, so that this choice can be made intelligently. Furthermore, the method is applicable to a broad class of models, including the ultra-fast memory-based methods such as RAMnets. This brings the added benefit of providing, for the first time, the means to analyse the generalisation properties of such models in a Bayesian framework.