Encoding Geometric Invariances in Higher-Order Neural Networks

Part of Neural Information Processing Systems 0 (NIPS 1987)


Authors

C. Giles, R. Griffin, T. Maxwell

Abstract

We describe a method of constructing higher-order neural networks that respond invariantly under geometric transformations on the input space. By requiring each unit to satisfy a set of constraints on the interconnection weights, a particular structure is imposed on the network. A network built using such an architecture maintains its invariant performance independent of the values the weights assume, of the learning rules used, and of the form of the nonlinearities in the network. The invariance exhibited by a first-order network is usually of a trivial sort, e.g., responding only to the average input in the case of translation invariance, whereas higher-order networks can perform useful functions and still exhibit the invariance. We derive the weight constraints for translation, rotation, scale, and several combinations of these transformations, and report results of simulation studies.
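To make the constraint idea concrete, the sketch below (an illustration under our own naming, not code from the paper) builds a second-order unit whose weight w(i, j) is forced to depend only on the relative displacement (j - i) mod N. Under any circular translation of the input, the unit's output is then unchanged, regardless of the values the free weights take and of the choice of nonlinearity, which is exactly the property described in the abstract.

```python
import numpy as np

# Minimal sketch (assumed names: N, v, second_order_unit) of a second-order
# unit y = f(sum_{i,j} w(i, j) x_i x_j) with the translation-invariance
# constraint w(i, j) = v[(j - i) mod N].

rng = np.random.default_rng(0)
N = 8                        # number of input units
v = rng.normal(size=N)       # one free weight per displacement class (j - i) mod N

def second_order_unit(x, v):
    n = len(x)
    s = sum(v[(j - i) % n] * x[i] * x[j]
            for i in range(n) for j in range(n))
    return np.tanh(s)        # any pointwise nonlinearity preserves the invariance

x = rng.normal(size=N)
y0 = second_order_unit(x, v)
y1 = second_order_unit(np.roll(x, 3), v)   # circularly translated copy of x
print(y0, y1)                # equal up to floating-point rounding
assert np.isclose(y0, y1)
```

The invariance holds because shifting the input merely relabels the index pairs (i, j) within each displacement class, so the constrained sum, and hence the unit's output, is unchanged for any choice of v and any pointwise nonlinearity.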