{"title": "Processing of Visual and Auditory Space and Its Modification by Experience", "book": "Advances in Neural Information Processing Systems", "page_first": 1186, "page_last": 1187, "abstract": null, "full_text": "Processing of Visual and Auditory Space and Its Modification by Experience \n\nJosef P. Rauschecker \nLaboratory of Neurophysiology \nNational Institute of Mental Health \nPoolesville, MD 20837 \n\nTerrence J. Sejnowski \nComputational Neurobiology Lab \nThe Salk Institute \nSan Diego, CA 92138 \n\nVisual spatial information is projected from the retina to the brain in a highly topographic fashion, so that 2-D visual space is represented in a simple retinotopic map. Auditory spatial information, by contrast, has to be computed from binaural time and intensity differences as well as from monaural spectral cues produced by the head and ears. Evaluation of these cues in the central nervous system leads to the generation of neurons that are sensitive to the location of a sound source in space (\"spatial tuning\") and, in some animal species, to auditory space maps in which spatial location is encoded as a 2-D map, just as in the visual system. The brain structures thought to be involved in the multimodal integration of visual and auditory spatial information are the superior colliculus in the midbrain and the inferior parietal lobe in the cerebral cortex. \n\nIt has been suggested for the owl that the visual system participates in setting up the auditory space map in the superior colliculus. Rearing owls with displacing prisms, for example, shifts the map by a fixed amount. These behavioral and neurobiological findings have been successfully incorporated into a connectionist model of the owl's sound localization system (Rosen, Rumelhart, and Knudsen, 1994). 
On the other hand, cats that are reared with both eyes sutured shut develop completely normal auditory spatial mechanisms: precision of sound localization is even improved above normal (Rauschecker and Kniepert, 1994), and a higher number of auditory neurons with sharper spatial tuning is found in the parietal cortex of such cats (Rauschecker and Korte, 1993). Non-visual sensory signals and/or motor feedback must therefore be capable of calibrating the auditory spatial mechanisms. Activity-dependent Hebbian learning and synaptic competition between inputs to the parietal region from different sensory modalities are sufficient to explain these results. \n\nThe question remains how visual and auditory information are kept in spatial register with each other when the animal moves its eyes or head. Experiments in awake behaving monkeys help to solve this problem. Neurons in the lateral intraparietal area of cortex (LIP) respond to visual and auditory stimuli that call for a movement to the same location in space. Neuronal responses in both modalities are modulated by eye position, leading to \"gain fields\", in which the location of a target in head-centered coordinates is encoded via the response strength in a population of neurons (Andersen, Snyder, Li, and Stricanne, 1993). \n\nThe neurobiological data from owls, cats, and monkeys were used to develop a neural network model of multisensory integration (Pouget and Sejnowski, 1993). A set of basis functions was introduced that replaces the conventional allocentric representations and produces gain fields similar to those in monkey parietal cortex. An extension of the model also incorporates the plasticity of this system. 
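The gain-field scheme just described can be sketched as a toy model. This is only an illustrative sketch, not the published model: the function name, the Gaussian tuning width, and the linear gain slope below are all assumptions made for the example.

```python
import math

# Toy sketch of a "gain field" neuron (illustrative assumptions throughout):
# a Gaussian visual receptive field in retinal coordinates whose amplitude is
# scaled -- not shifted -- by eye position, the multiplicative form used in
# basis-function accounts of parietal cortex.

def gain_field_response(retinal_pos, eye_pos, pref_retinal, pref_eye,
                        sigma=10.0, slope=0.02):
    """Firing rate = Gaussian retinal tuning x rectified linear eye-position gain."""
    tuning = math.exp(-(retinal_pos - pref_retinal) ** 2 / (2 * sigma ** 2))
    gain = max(1.0 + slope * (eye_pos - pref_eye), 0.0)  # planar gain, no negative rates
    return tuning * gain

# The same retinal stimulus evokes different firing rates at different eye
# positions, so a population of such neurons jointly encodes the head-centered
# target location (retinal location + eye position).
r_left = gain_field_response(retinal_pos=5.0, eye_pos=-20.0, pref_retinal=5.0, pref_eye=0.0)
r_right = gain_field_response(retinal_pos=5.0, eye_pos=20.0, pref_retinal=5.0, pref_eye=0.0)
assert r_left < r_right  # identical retinal input, gain differs with eye position
```

Because the response is a product of retinal tuning and eye-position gain, a downstream unit can form head-centered responses as weighted sums over such a population, which is what makes these products usable as basis functions.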
Predictive Hebbian learning is used to bring the visual and auditory maps into register. In the network, a Hebb rule is gated by a reinforcement term: the difference between the reinforcement actually received and the reinforcement the system expects. The model utilizes the activity of diffuse transmitter projection systems, such as noradrenaline (NA), acetylcholine (ACh), and dopamine (DA), which are known to play an important role in plasticity in the brain of higher mammals. \n\nIn summary, it appears extremely fruitful to bring together neuroscientists and neural network modelers, because both groups can profit from each other. Neurobiological data are the flesh for realistic network models, and models are helpful to formalize a biological hypothesis and guide the way for further testing. \n\nAndersen RA, Snyder LH, Li C-S, Stricanne B (1993) Coordinate transformations in the representation of spatial information. Curr Opin Neurobiol 3:171-176. \n\nPouget A, Fisher SA, Sejnowski TJ (1993) Egocentric spatial representation in early vision. J Cog Neurosci 5:150-161. \n\nRauschecker JP, Korte M (1993) Auditory compensation for early blindness in cat cerebral cortex. J Neurosci 13:4538-4548. \n\nRauschecker JP, Kniepert U (1994) Enhanced precision of auditory localization behavior in visually deprived cats. Eur J Neurosci 6 (in press). \n\nRosen D, Rumelhart D, Knudsen E (1994) A connectionist model of the owl's sound localization system. In: Advances in Neural Information Processing Systems 6, Cowan J, Tesauro G, Alspector J (eds), San Mateo, CA: Morgan Kaufmann (in press).", "award": [], "sourceid": 760, "authors": [{"given_name": "Josef", "family_name": "Rauschecker", "institution": null}, {"given_name": "Terrence", "family_name": "Sejnowski", "institution": null}]}