Shan Parfitt, Peter Tiño, Georg Dorffner
We introduce a novel method of constructing language models that avoids some of the problems associated with recurrent neural networks. The method of creating a Prediction Fractal Machine (PFM) is briefly described, and experiments are presented which demonstrate the suitability of PFMs for language modeling. PFMs distinguish reliably between minimal pairs, and their behavior is consistent with the hypothesis that well-formedness is 'graded' rather than absolute. A discussion of their potential to offer fresh insights into language acquisition and processing follows.
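Since the abstract only names the construction, the following is a minimal sketch of the chaos-game (iterated function system) encoding that prediction fractal machines are typically built on: each symbol contracts the current point toward that symbol's corner of the unit hypercube, so sequences sharing a recent history land close together, and next-symbol statistics can then be estimated per region of the resulting point cloud. The function name, corner layout, and contraction ratio here are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def chaos_game_points(sequence, corners, k=0.5):
    """Map a symbol sequence to points in the unit hypercube.

    Each symbol s moves the current point a fraction (1 - k) of the
    way toward corners[s]; k is the contraction ratio (0.5 is the
    usual chaos-game choice). Points visited after similar suffixes
    end up geometrically close, which is what a PFM exploits.
    """
    x = np.full(corners.shape[1], 0.5)  # start at the cube's center
    points = []
    for s in sequence:
        x = k * x + (1 - k) * corners[s]
        points.append(x.copy())
    return np.array(points)

# Example: a binary alphabet mapped to the ends of the unit interval.
corners = np.array([[0.0], [1.0]])
pts = chaos_game_points([0, 1, 1, 0], corners)
# pts[-1] encodes the whole history [0, 1, 1, 0], with the most
# recent symbols dominating its position.
```

A full PFM would then quantize these points into clusters and attach an empirical next-symbol distribution to each cluster; only the encoding step is shown here.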