Part of Advances in Neural Information Processing Systems 4 (NIPS 1991)
Igor Grebert, David Stork, Ron Keesing, Steve Mims
We designed and trained a connectionist network to generate letterforms in a new font given just a few exemplars from that font. During learning, our network constructed a distributed internal representation of fonts as well as letters, despite the fact that each training instance exemplified both a font and a letter. It was necessary to have separate but interconnected hidden units for "letter" and "font" representations; several alternative architectures were not successful.
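To make the architectural claim concrete, here is a minimal, hypothetical sketch in PyTorch of a network with separate but interconnected hidden groups for letter identity and font identity, which jointly generate a letterform bitmap. This is not the authors' 1991 implementation; the layer sizes, one-hot input encodings, bitmap resolution, and the particular cross-connection scheme are all assumptions chosen only to illustrate the idea.

```python
# Hypothetical sketch: separate "letter" and "font" hidden groups with
# cross-connections, jointly driving an output layer that generates a
# letterform bitmap. Sizes and encodings are illustrative assumptions.
import torch
import torch.nn as nn

class LetterFontNet(nn.Module):
    def __init__(self, n_letters=26, n_fonts=8, n_letter_hidden=20,
                 n_font_hidden=10, bitmap_size=16 * 16):
        super().__init__()
        # Separate hidden groups: one driven by the letter code,
        # one driven by the font code.
        self.letter_hidden = nn.Linear(n_letters, n_letter_hidden)
        self.font_hidden = nn.Linear(n_fonts, n_font_hidden)
        # Cross-connections so each group can modulate the other.
        self.letter_to_font = nn.Linear(n_letter_hidden, n_font_hidden, bias=False)
        self.font_to_letter = nn.Linear(n_font_hidden, n_letter_hidden, bias=False)
        # Output layer generates the letterform bitmap from both groups.
        self.output = nn.Linear(n_letter_hidden + n_font_hidden, bitmap_size)

    def forward(self, letter_onehot, font_onehot):
        h_letter = torch.sigmoid(self.letter_hidden(letter_onehot))
        h_font = torch.sigmoid(self.font_hidden(font_onehot))
        # Interconnect the two hidden groups before the output layer.
        m_letter = torch.sigmoid(h_letter + self.font_to_letter(h_font))
        m_font = torch.sigmoid(h_font + self.letter_to_font(h_letter))
        return torch.sigmoid(self.output(torch.cat([m_letter, m_font], dim=-1)))

# Usage: one-hot letter and font codes in, a letterform bitmap out.
net = LetterFontNet()
letter = torch.zeros(1, 26); letter[0, 0] = 1.0   # e.g. the letter "A"
font = torch.zeros(1, 8); font[0, 2] = 1.0        # e.g. the third training font
bitmap = net(letter, font)                        # shape (1, 256)
```

In such a setup, generating a letter in a new font would amount to fitting a new font code (and possibly the font hidden units' weights) from a few exemplars while keeping the letter pathway fixed, so that the shared letter knowledge transfers to the unseen font.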