Part of Advances in Neural Information Processing Systems 6 (NIPS 1993)
Dana Ron, Yoram Singer, Naftali Tishby
We propose a learning algorithm for a variable memory length Markov process. Human communication, whether given as text, handwriting, or speech, has multiple characteristic time scales. On short scales it is characterized mostly by the dynamics that generate the process, whereas on large scales, more syntactic and semantic information is carried. For that reason the conventionally used fixed memory Markov models cannot effectively capture the complexity of such structures. On the other hand, using long memory models uniformly is not practical even for a memory as short as four. The algorithm we propose is based on minimizing the statistical prediction error by extending the memory, or state length, adaptively, until the total prediction error is sufficiently small. We demonstrate the algorithm by learning the structure of natural English text and applying the learned model to the correction of corrupted text. Using fewer than 3000 states, the model's performance is far superior to that of fixed memory models with a similar number of states. We also show how the algorithm can be applied to intergenic E. coli DNA base prediction with results comparable to HMM based methods.
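The adaptive memory-extension idea can be sketched in a few lines. The sketch below is illustrative only: the function names, the fixed `max_depth`, `min_count`, and `threshold` parameters, and the smoothed KL-divergence test are simplifying assumptions of ours, not the paper's exact statistical criterion, which is derived from the prediction error itself. It shows the core loop: a context is deepened one symbol further into the past only when the longer context predicts the next symbol differently enough from its shorter suffix.

```python
import math
from collections import Counter

def next_symbol_counts(text, context):
    """Count the symbols that follow each occurrence of `context` in `text`."""
    counts = Counter()
    k = len(context)
    for i in range(k, len(text)):
        if text[i - k:i] == context:
            counts[text[i]] += 1
    return counts

def kl_divergence(child_counts, parent_counts, alphabet, eps=1e-6):
    """Smoothed KL divergence between two empirical next-symbol distributions."""
    n_c = sum(child_counts.values())
    n_p = sum(parent_counts.values())
    d = 0.0
    for s in alphabet:
        p = (child_counts[s] + eps) / (n_c + eps * len(alphabet))
        q = (parent_counts[s] + eps) / (n_p + eps * len(alphabet))
        d += p * math.log(p / q)
    return d

def learn_contexts(text, alphabet, max_depth=4, min_count=5, threshold=0.05):
    """Grow a set of variable-length contexts, extending the memory only
    where the longer context changes the next-symbol prediction enough."""
    contexts = {""}          # the empty context (order-0 model) is always kept
    frontier = [""]
    while frontier:
        parent = frontier.pop()
        if len(parent) >= max_depth:
            continue
        parent_counts = next_symbol_counts(text, parent)
        for a in alphabet:
            child = a + parent   # extend the memory one symbol further back
            child_counts = next_symbol_counts(text, child)
            if sum(child_counts.values()) < min_count:
                continue         # too rare to estimate a distribution reliably
            if kl_divergence(child_counts, parent_counts, alphabet) > threshold:
                contexts.add(child)
                frontier.append(child)   # a useful context may extend further
    return contexts

# Example: contexts learned from a toy string over the alphabet {'a', 'b'}
print(sorted(learn_contexts("abab" * 200 + "aabb" * 50, "ab")))
```

Because extension is decided per context, the resulting model spends its states only where long memory actually improves prediction, which is how a few thousand variable-length states can outperform a uniform fixed-order model of comparable size.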