Combining Visual and Acoustic Speech Signals with a Neural Network Improves Intelligibility

Part of Advances in Neural Information Processing Systems 2 (NIPS 1989)


Authors

Terrence J. Sejnowski, Ben Yuhas, Moise Goldstein, Robert Jenkins

Abstract


Acoustic speech recognition degrades in the presence of noise. Compensatory information is available from the visual speech signals around the speaker's mouth. Previous attempts at using these visual speech signals to improve automatic speech recognition systems have combined the acoustic and visual speech information at a symbolic level using heuristic rules. In this paper, we demonstrate an alternative approach to fusing the visual and acoustic speech information by training feedforward neural networks to map the visual signal onto the corresponding short-term spectral amplitude envelope (STSAE) of the acoustic signal. This information can be directly combined with the degraded acoustic STSAE. Significant improvements are demonstrated in vowel recognition from noise-degraded acoustic signals. These results are compared to the performance of humans, as well as other pattern matching and estimation algorithms.
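
The sketch below illustrates the general idea described in the abstract: a small feedforward network maps a visual (mouth-region) frame to an estimated spectral envelope, which is then blended with a noise-degraded acoustic envelope. All dimensions, the network size, the training rule, and the fusion weight are assumptions made for illustration only; they are not the paper's actual architecture or fusion method.

```python
import numpy as np

# Hypothetical dimensions (not taken from the paper): a flattened mouth-region
# image as input and a spectral amplitude envelope with a few channels as output.
VISUAL_DIM = 20 * 25      # assumed mouth-image size
HIDDEN_DIM = 32           # assumed hidden-layer width
STSAE_DIM = 32            # assumed number of envelope channels

rng = np.random.default_rng(0)

# One-hidden-layer feedforward network with sigmoid units, trained by plain
# gradient descent on squared error between predicted and true envelopes.
W1 = rng.normal(scale=0.01, size=(HIDDEN_DIM, VISUAL_DIM))
b1 = np.zeros(HIDDEN_DIM)
W2 = rng.normal(scale=0.01, size=(STSAE_DIM, HIDDEN_DIM))
b2 = np.zeros(STSAE_DIM)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(visual):
    """Map a visual frame to an estimated spectral envelope."""
    h = sigmoid(W1 @ visual + b1)
    return sigmoid(W2 @ h + b2), h

def train_step(visual, target_stsae, lr=0.1):
    """One gradient-descent step on squared error for a single frame."""
    global W1, b1, W2, b2
    y, h = forward(visual)
    err = y - target_stsae
    # Backpropagate through the sigmoid output and hidden layers.
    dy = err * y * (1 - y)
    dW2 = np.outer(dy, h)
    dh = (W2.T @ dy) * h * (1 - h)
    dW1 = np.outer(dh, visual)
    W2 -= lr * dW2; b2 -= lr * dy
    W1 -= lr * dW1; b1 -= lr * dh
    return float(np.mean(err ** 2))

def fuse(visual_estimate, noisy_acoustic, alpha=0.5):
    """One possible fusion rule: a convex combination of the visually
    estimated envelope and the noise-degraded acoustic envelope."""
    return alpha * visual_estimate + (1 - alpha) * noisy_acoustic

# Toy usage with random stand-in data.
frame = rng.random(VISUAL_DIM)
clean_env = rng.random(STSAE_DIM)
for _ in range(100):
    train_step(frame, clean_env)
noisy_env = clean_env + rng.normal(scale=0.2, size=STSAE_DIM)
enhanced_env = fuse(forward(frame)[0], noisy_env)
```

In practice the fused envelope would feed a downstream recognizer (vowel classification in the paper's experiments); the fixed blending weight here is only a stand-in for whatever combination rule is appropriate for a given noise level.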