Generalized Learning Vector Quantization

Part of Advances in Neural Information Processing Systems 8 (NIPS 1995)


Authors

Atsushi Sato, Keiji Yamada

Abstract

We propose a new learning method, "Generalized Learning Vector Quantization (GLVQ)," in which reference vectors are updated based on the steepest descent method in order to minimize the cost function. The cost function is determined so that the obtained learning rule satisfies the convergence condition. We prove that Kohonen's rule as used in LVQ does not satisfy the convergence condition and thus degrades recognition ability. Experimental results for printed Chinese character recognition reveal that GLVQ is superior to LVQ in recognition ability.
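The abstract does not spell out the cost function or the update equations, so the following is only a minimal sketch of a GLVQ-style steepest-descent step. It assumes the relative-distance measure mu = (d1 - d2) / (d1 + d2) commonly associated with GLVQ, squared Euclidean distances, and a sigmoid loss f(mu); the function name and learning-rate parameter are illustrative, not from the paper.

```python
import numpy as np

def glvq_step(x, label, prototypes, proto_labels, lr=0.05):
    """One GLVQ-style update for a single sample (sketch, not the
    paper's exact formulation).

    Assumes the per-sample cost f(mu) with mu = (d1 - d2) / (d1 + d2),
    where d1 is the squared distance to the nearest prototype of the
    correct class and d2 to the nearest prototype of any other class.
    """
    d = np.sum((prototypes - x) ** 2, axis=1)      # squared Euclidean distances
    same = proto_labels == label
    i = np.where(same)[0][np.argmin(d[same])]      # nearest correct prototype
    j = np.where(~same)[0][np.argmin(d[~same])]    # nearest incorrect prototype
    d1, d2 = d[i], d[j]

    mu = (d1 - d2) / (d1 + d2)                     # relative distance in [-1, 1]
    f = 1.0 / (1.0 + np.exp(-mu))                  # sigmoid loss f(mu)
    f_prime = f * (1.0 - f)                        # df/dmu
    denom = (d1 + d2) ** 2

    # Steepest descent on f(mu): attract the correct prototype,
    # repel the incorrect one, each scaled by the other distance.
    prototypes[i] += lr * f_prime * 4.0 * d2 / denom * (x - prototypes[i])
    prototypes[j] -= lr * f_prime * 4.0 * d1 / denom * (x - prototypes[j])
    return f                                       # current per-sample loss

# Usage sketch: iterate over a labeled training set for several epochs.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) + np.repeat([[0, 0], [3, 3]], 50, axis=0)
y = np.repeat([0, 1], 50)
protos = X[[0, 50]].copy()                         # one prototype per class
proto_labels = np.array([0, 1])
for _ in range(10):
    for x, lbl in zip(X, y):
        glvq_step(x, lbl, protos, proto_labels)
```

Because mu lies in [-1, 1] and decreases only when the sample moves closer to its own class relative to the competing class, this form of update is the kind that can satisfy the convergence condition the abstract refers to, in contrast to Kohonen's LVQ rule.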