Assaf Zeevi, Ron Meir, Robert Adler
We consider the problem of prediction of stationary time series, using the architecture known as the mixture of experts model (MEM). Here we suggest a mixture which blends several autoregressive models. This study focuses on some theoretical foundations of the prediction problem in this context. More precisely, it is demonstrated that this model is a universal approximator with respect to learning the unknown prediction function. This statement is strengthened as upper bounds on the mean squared error are established. Based on these results it is possible to compare the MEM to other families of models (e.g., neural networks and state-dependent models). It is shown that a degenerate version of the MEM is in fact equivalent to a neural network, and that the number of experts in the architecture plays a role similar to that of the number of hidden units in the latter model.
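The core construction described above, a gating network that forms a convex combination of several autoregressive predictors, can be sketched as follows. This is only an illustrative implementation, not the authors' specification: the softmax gating form, the parameter names (`gate_W`, `gate_b`, `expert_coefs`), and the linear AR(d) experts are assumptions chosen to match the standard mixture-of-experts setup.

```python
import numpy as np

def mem_predict(x_past, gate_W, gate_b, expert_coefs):
    """One-step prediction from a mixture of m autoregressive experts.

    x_past       : length-d array of the most recent observations.
    gate_W, gate_b : softmax gating parameters, shapes (m, d) and (m,).
    expert_coefs : (m, d) matrix; row i holds the AR(d) coefficients
                   of expert i.
    (All names are illustrative, not taken from the paper.)
    """
    logits = gate_W @ x_past + gate_b
    logits = logits - logits.max()                 # numerical stability
    gates = np.exp(logits) / np.exp(logits).sum()  # softmax mixing weights
    expert_preds = expert_coefs @ x_past           # each expert's AR forecast
    return gates @ expert_preds                    # convex combination
```

Because the gates are nonnegative and sum to one, the mixture's prediction always lies in the convex hull of the individual experts' predictions; the gating network selects which local AR model dominates in each region of the state space.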