The primate visual system learns to recognize the true direction of pattern motion using local detectors capable of detecting only the component of motion perpendicular to the orientation of the moving edge. A multilayer feedforward network model similar to Linsker's model was presented with input patterns, each consisting of randomly oriented contours moving in a particular direction. Input-layer units are granted component direction and speed tuning curves similar to those recorded from neurons in primate visual area V1 that project to area MT. The network is trained on many such patterns until most weights saturate. A proportion of the units in the second layer solve the aperture problem (e.g., show the same direction-tuning curve peak to gratings), resembling pattern-direction-selective neurons, which first appear in area MT.
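The training scheme described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the number of input units, the half-rectified cosine tuning curves, the Hebbian update rule with mean-subtracted input, and the learning-rate and saturation constants are all assumptions made for the sketch. Each simulated contour signals only the motion component along its normal, which is the aperture problem the second-layer unit must overcome.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN = 36  # input units with preferred component directions spaced 10 deg apart
prefs = np.deg2rad(np.arange(0, 360, 10))

def input_responses(pattern_dir, n_contours=8):
    """Responses of component-direction-tuned input units to a pattern of
    randomly oriented contours, all moving in pattern_dir at unit speed.
    Each contour signals only the motion component perpendicular to its
    orientation (the aperture problem)."""
    r = np.zeros(N_IN)
    for _ in range(n_contours):
        ori = rng.uniform(0, np.pi)      # random contour orientation
        normal = ori + np.pi / 2         # direction perpendicular to the contour
        comp = np.cos(pattern_dir - normal)  # component speed along the normal
        if comp < 0:                     # motion is along the opposite normal
            normal += np.pi
            comp = -comp
        # half-rectified cosine direction tuning, scaled by component speed
        r += comp * np.clip(np.cos(prefs - normal), 0.0, None)
    return r / n_contours

# One second-layer unit trained with a Hebbian rule; weights saturate
# because they are clipped to [0, 1] after every update.
w = rng.uniform(0.4, 0.6, N_IN)
ETA = 0.05
for _ in range(2000):
    d = rng.uniform(0, 2 * np.pi)        # random pattern direction per trial
    x = input_responses(d)
    y = w @ x                            # second-layer unit activity
    w = np.clip(w + ETA * y * (x - x.mean()), 0.0, 1.0)

print(w.min(), w.max())                  # weights remain within the saturation bounds
```

After training, such a unit pools across input units whose component-direction preferences bracket a common pattern direction, which is one route by which its direction-tuning peak can come to follow the pattern motion rather than any single contour's component motion.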