Neural System Model of Human Sound Localization

Part of Advances in Neural Information Processing Systems 12 (NIPS 1999)


Authors

Craig Jin, Simon Carlile

Abstract

This paper examines the role of biological constraints in the human auditory localization process. A psychophysical and neural system modeling approach was undertaken in which performance comparisons between competing models and a human subject explore the relevant biologically plausible "realism constraints". The directional acoustical cues, upon which sound localization is based, were derived from the human subject's head-related transfer functions (HRTFs). Sound stimuli were generated by convolving bandpass noise with the HRTFs and were presented to both the subject and the model. The input stimuli to the model were processed using the Auditory Image Model of cochlear processing. The cochlear data were then analyzed by a time-delay neural network which integrated temporal and spectral information to determine the spatial location of the sound source. The combined cochlear model and neural network provided a system model of the sound localization process. Human-like localization performance was qualitatively achieved for broadband and bandpass stimuli when the model architecture incorporated frequency division (or tonotopicity), and was trained using variable bandwidth and center-frequency sounds.
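The stimulus-generation step described in the abstract (bandpass noise convolved with the subject's HRTFs to produce binaural input) can be sketched as follows. This is a minimal illustration, not the authors' code: the filter order, band edges, and the toy impulse responses standing in for measured head-related impulse responses (HRIRs) are all assumptions.

```python
import numpy as np
from scipy.signal import butter, lfilter, fftconvolve

def make_bandpass_noise(fs, duration, low_hz, high_hz, rng=None):
    """Generate bandpass Gaussian noise via a Butterworth filter."""
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.standard_normal(int(fs * duration))
    b, a = butter(4, [low_hz, high_hz], btype="band", fs=fs)
    return lfilter(b, a, noise)

def spatialize(mono, hrir_left, hrir_right):
    """Convolve a monaural stimulus with left/right head-related
    impulse responses to produce a 2-channel binaural signal."""
    left = fftconvolve(mono, hrir_left, mode="full")
    right = fftconvolve(mono, hrir_right, mode="full")
    return np.stack([left, right], axis=0)

fs = 44100
noise = make_bandpass_noise(fs, duration=0.5, low_hz=300, high_hz=14000)

# Toy HRIRs: a delayed, attenuated impulse per ear mimics the interaural
# time and level differences a real measured HRIR would encode.
hrir_l = np.zeros(256); hrir_l[10] = 1.0
hrir_r = np.zeros(256); hrir_r[14] = 0.8
binaural = spatialize(noise, hrir_l, hrir_r)
```

In the actual study the HRIRs would come from the subject's own measured HRTFs, and the resulting binaural signal would be passed through the cochlear model before reaching the network.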