H. Wang, Bimal Mathur, Christof Koch
We demonstrate a multiscale adaptive network model of motion computation in primate area MT. The model consists of two stages: (1) local velocities are measured across multiple spatio-temporal channels, and (2) the optical flow field is computed by a network of direction-selective neurons at multiple spatial resolutions. This model embeds the computational efficiency of multigrid algorithms within a parallel network and adaptively computes the most reliable estimate of the flow field across different spatial scales. Our model neurons show the same nonclassical receptive field properties as Allman's type I MT neurons. Since local velocities are measured across multiple channels, the channels often provide conflicting measurements to the network. We have incorporated a veto scheme for conflict resolution. This mechanism provides a novel explanation for the spatial frequency dependency of the psychophysical phenomenon called Motion Capture.
1 MOTIVATION

We previously developed a two-stage model of motion computation in the visual system of primates (i.e. the magnocellular pathway from retina to V1 and MT; Wang, Mathur & Koch, 1989). That algorithm left open two issues: (1) the optimal spatial scale for velocity measurement, and (2) the optimal spatial scale for the smoothness of the motion field. To address these deficiencies, we have implemented a multi-scale motion network based on multigrid algorithms. All methods of estimating optical flow make a basic assumption about the scale of the velocity relative to the spatial neighborhood and to the temporal discretization step or delay. Thus, if the velocity of the pattern is much larger than the ratio of the spatial to temporal sampling step, an incorrect velocity value will be obtained (Battiti, Amaldi & Koch, 1991). Battiti et al. proposed a coarse-to-fine strategy for adaptively determining
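The sampling-limit claim can be illustrated with a minimal one-dimensional gradient-based velocity estimate. This is a sketch for intuition only, not the network described in the paper; the function name and parameter values (a drifting sinusoid of wavelength 20 sampled at unit spatial and temporal steps) are illustrative assumptions. A small velocity is recovered correctly, while a shift larger than half the wavelength per frame is temporally aliased and produces a wrong (here, sign-reversed) estimate:

```python
import numpy as np

def gradient_velocity(v_true, wavelength=20.0, n=200):
    """Least-squares gradient velocity estimate for a drifting 1-D sinusoid.

    The grating sin(2*pi*(x - v*t)/wavelength) is sampled on a unit spatial
    grid at two successive frames (delta t = 1); velocity is estimated from
    the motion constraint I_x * v + I_t = 0 by least squares over the signal.
    """
    k = 2.0 * np.pi / wavelength
    x = np.arange(n)
    frame0 = np.sin(k * x)             # intensity at t = 0
    frame1 = np.sin(k * (x - v_true))  # intensity at t = 1
    I_t = frame1 - frame0                                    # temporal difference
    I_x = (np.roll(frame0, -1) - np.roll(frame0, 1)) / 2.0   # central difference
    return -np.sum(I_t * I_x) / np.sum(I_x ** 2)

# Small velocity (1 pixel/frame, well below the sampling limit): recovered.
print(gradient_velocity(1.0))    # ~ 1.0
# Large velocity (12 > wavelength/2 = 10): temporally aliased, wrong sign.
print(gradient_velocity(12.0))   # negative estimate
```

The failure for the fast grating is exactly the situation that motivates a coarse-to-fine strategy: at a coarser spatial scale (larger effective wavelength), the same physical velocity falls back below the sampling limit.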