{"title": "A Dynamical Model of Context Dependencies for the Vestibulo-Ocular Reflex", "book": "Advances in Neural Information Processing Systems", "page_first": 89, "page_last": 95, "abstract": "", "full_text": "A Dynamical Model of Context Dependencies for the Vestibulo-Ocular Reflex \n\nOlivier J.M.D. Coenen* \n\nTerrence J. Sejnowski† \n\nComputational Neurobiology Laboratory \nHoward Hughes Medical Institute \nThe Salk Institute for Biological Studies \n10010 North Torrey Pines Road \nLa Jolla, CA 92037, U.S.A. \n\nDepartments of †Biology and *†Physics \nUniversity of California, San Diego \nLa Jolla, CA 92093, U.S.A. \n\n{olivier,terry}@salk.edu \n\nAbstract \n\nThe vestibulo-ocular reflex (VOR) stabilizes images on the retina during rapid head motions. The gain of the VOR (the ratio of eye to head rotation velocity) is typically around -1 when the eyes are focused on a distant target. However, to stabilize images accurately, the VOR gain must vary with context (eye position, eye vergence and head translation). We first describe a kinematic model of the VOR which relies solely on sensory information available from the semicircular canals (head rotation), the otoliths (head translation), and neural correlates of eye position and vergence angle. We then propose a dynamical model and compare it to the eye velocity responses measured in monkeys. The dynamical model reproduces the observed amplitude and time course of the modulation of the VOR and suggests one way to combine the required neural signals within the cerebellum and the brain stem. It also makes predictions for the responses of neurons to multiple inputs (head rotation and translation, eye position, etc.) in the oculomotor system. \n\n1  Introduction \n\nThe VOR stabilizes images on the retina during rapid head motions: Rotations and translations of the head in three dimensions must be compensated by appropriate rotations of the eye. 
Because the head's rotation axis is not the same as the eye's rotation axis, the calculations for proper image stabilization of an object must take into account diverse variables such as object distance from each eye, \n\n\f90 \n\nO. J. M. D. COENEN, T. J. SEJNOWSKI \n\ngaze direction, and head translation (Viire et al., 1986). The stabilization is achieved by integrating information from different sources: head rotations from the semicircular canals of the inner ear, head translations from the otolith organs, eye positions, viewing distance, as well as other context information, such as posture (head tilts) or activity (walking, running) (Snyder and King, 1992; Shelhamer et al., 1992; Grossman et al., 1989). In this paper we concentrate on the context modulation of the VOR which can be described by the kinematics of the reflex, i.e. eye position, eye vergence and head translation. \n\n2  The Vestibulo-Ocular Reflex: Kinematic Model \n\n[Figure 1 diagram labels: coordinate system, eye position vector, target object, gaze vector, gaze angle, interocular distance, rotation axis, semicircular canals and otoliths, head (top view), origin of coordinate system (arbitrary)] \n\nFigure 1: Diagram showing the definition of the vectors used in the equation of the kinematic model of the vestibulo-ocular reflex. \n\nThe ideal VOR response is a compensatory eye movement which keeps the image fixed on the retina for any head rotations and translations. We therefore derived an equation for the eye rotation velocity by requiring that a target remain stationary on the retina. The velocity of the resulting compensatory eye rotation can be written as (see fig. 
1): \n\nw = -Ω_c + (ĝ/|g|) × [D_ej × Ω_c - T_oj]   (1) \n\nwhere Ω_c is the head rotation velocity sensed by the semicircular canals, T_oj is the head translation velocity sensed by the otoliths, D_ej ≡ (e - o_j), e is a constant vector specifying the location of an eye in the head, o_j is the position of either the left or right otolith, ĝ and |g| are the unit vector and amplitude of the gaze vector: ĝ gives the eye position (orientation of the eye relative to the head), |g| gives the distance from the eye to the object, and the symbol × indicates the cross-product between two vectors. w and Ω_c are rotation vectors which describe the instantaneous angular velocity of the eye and head, respectively. A rotation vector lies along the instantaneous axis of rotation; its magnitude indicates the speed of rotation around the axis, and its direction is given by the right-hand screw rule. A motion of the head combining rotation (Ω) and translation (T) is sensed as the combination of a rotation velocity Ω_c measured by the semicircular canals and a translation velocity T_o sensed by the otoliths. The rotation vectors are equal (Ω = Ω_c), and the translation velocity vector as measured by the otoliths is given by T_oj = a_oj × Ω + T, where a_oj ≡ (a - o_j), and a is the position vector of the axis of rotation. \n\nThe special case where the gaze is horizontal and the rotation vector is vertical (horizontal head rotation) has been studied extensively in the literature. We used this special case in the simulations. In that case w may be simplified by writing its equation with dot products. Since ĝ and Ω_c are then perpendicular (ĝ · Ω_c = 0), 
the first term of the following expression in brackets is zero: \n\nw = -Ω_c + (1/|g|) [(ĝ·Ω_c) D_ej - (ĝ·D_ej) Ω_c - ĝ × T_oj]   (2) \n\nThe semicircular canals decompose and report acceleration and velocity of head rotation Ω by its components Ω_c along the three canals on each side of the head: horizontal, anterior and posterior. The two otolith organs on each side report the dynamical inertial forces generated during linear motion (translation) in two perpendicular planes, one vertical and the other horizontal relative to the head. Here we assume that a translation velocity signal (T_o) derived from or reported by the otolith afferents is available. The otoliths also encode the head orientation relative to the gravity force vector, but this was not included in this study. \n\nTo complete the correspondence between the equation and a neural correlate, we need to determine a physiological source for ĝ and |g|. The eye position ĝ is assumed to be given by the output of the velocity-to-position transformation or so-called \"neural integrator\", which provides eye position information and which is necessary for the activation of the motoneurons to sustain the eye in a fixed position. The integrator for horizontal eye position appears to be located in the nucleus prepositus hypoglossi in the pons, and the vertical integrator in the midbrain interstitial nucleus of Cajal (Crawford, Cadera and Vilis, 1991; Cannon and Robinson, 1987). We assume that the eye position is given as the coordinates of the unit vector ĝ along the first two axes of the head coordinate system of fig. 1. The eye position depends on the eye velocity according to dĝ/dt = ĝ × w. For the special case w(t) = w(t)ẑ, i.e. for horizontal head rotation, the eye position coordinates are given by: \n\ng_1(t) = g_1(0) + ∫_0^t g_2(τ) w(τ) dτ \ng_2(t) = g_2(0) - ∫_0^t g_1(τ) w(τ) dτ   (3) \n\nThis is a set of two negatively coupled integrators. 
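The coupled integrators of equation (3) can be simulated directly. The following sketch uses a simple Euler scheme (the step size, duration, and eye velocity w are illustrative assumptions, not values from the paper) to confirm that ĝ rotates through the integrated eye velocity while remaining close to unit length:

```python
import math

# Euler simulation of the negatively coupled integrators of equation (3):
#   dg1/dt = g2 * w,   dg2/dt = -g1 * w
dt = 1e-4                 # time step in seconds; illustrative
g1, g2 = 0.0, 1.0         # initial eye position: gaze straight ahead
w = 0.5                   # constant horizontal eye velocity (rad/s); illustrative

for _ in range(int(1.0 / dt)):        # integrate for one second
    g1, g2 = g1 + g2 * w * dt, g2 - g1 * w * dt

# After 1 s the gaze has rotated by w*t = 0.5 rad, and |g| stays near 1.
print(g1, g2, math.hypot(g1, g2))
```

The simultaneous tuple update keeps the coupling symmetric; updating g1 and g2 sequentially would bias the rotation.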
The \"neural integrator\" therefore does not integrate the eye velocity directly but a product of eye position and eye velocity. The distance from eye to target |g| can be written using the gaze angles in the horizontal plane of the head: \n\nRight eye:  1/|g_R| = sin(θ_R - θ_L) / (I cos θ_L)   (4) \nLeft eye:  1/|g_L| = sin(θ_R - θ_L) / (I cos θ_R)   (5) \n\nwhere (θ_R - θ_L) is the vergence angle, and I is the interocular distance; the angles are measured from a straight-ahead gaze, and take on negative values when the eyes are turned towards the right. Within the oculomotor system, the vergence angle and speed are encoded by the mesencephalic reticular formation neurons (Judge and Cumming, 1986; Mays, 1984). The nucleus reticularis tegmenti pontis, with reciprocal connections to the flocculus, oculomotor vermis and paravermis of the cerebellum, also contains neurons whose activity varies linearly with vergence angle (Gamlin and Clarke, 1995). \n\nWe conclude that it is possible to perform the computations needed to obtain an ideal VOR with signals known to be available physiologically. \n\nDynamical Model Overview \n\n[Figure 2 diagram: brain stem and cerebellar connections, including the nucleus prepositus hypoglossi] \n\nFigure 2: Anatomical connections considered in the dynamical model. Only the left side is shown; the right side is identical and connected to the left side only for the calculation of the vergence angle. The nucleus prepositus hypoglossi and the nucleus reticularis tegmenti pontis are meant to be representative of a class of nuclei in the brain stem carrying eye position or vergence signals. All connections are known to exist except the connection from the prepositus nucleus to the reticularis nucleus, which has not been verified. Details of the cerebellum are in fig. 3 and of the vestibular nucleus in fig. 4. 
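As a check on the triangle geometry behind equations (4) and (5), the following sketch compares the inverse gaze distances computed from the gaze angles with directly computed Euclidean distances; the interocular distance and target position are illustrative assumptions, not values from the paper:

```python
import math

# Right eye at (+I/2, 0), left eye at (-I/2, 0); straight ahead is +y.
# Angles are measured from straight-ahead gaze and are negative when the
# eyes are turned towards the right, as in the text.
I = 0.06                  # interocular distance (m); illustrative
x_t, y_t = 0.05, 0.40     # hypothetical target position (m)

theta_R = math.atan2(I / 2 - x_t, y_t)    # right-eye gaze angle
theta_L = math.atan2(-I / 2 - x_t, y_t)   # left-eye gaze angle

# Equations (4) and (5): inverse gaze distances from vergence geometry.
inv_gR = math.sin(theta_R - theta_L) / (I * math.cos(theta_L))
inv_gL = math.sin(theta_R - theta_L) / (I * math.cos(theta_R))

# Direct Euclidean eye-to-target distances for comparison.
gR = math.hypot(x_t - I / 2, y_t)
gL = math.hypot(x_t + I / 2, y_t)
print(1 / inv_gR, gR)    # the two columns agree
print(1 / inv_gL, gL)
```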
\n\n3  Dynamical Model \n\nSnyder & King (1992) studied the effect of viewing distance and location of the axis of rotation on the VOR in monkeys; their main results are reproduced in fig. 5. In an attempt to reproduce their data and to understand how the signals that we have described in section 2 may be combined in time, we constructed a dynamical model based on the kinematic model. Its basic anatomical structure is shown in fig. 2. Details of the model are shown in figs. 3 and 4, where all constants are written using a millisecond time scale. The results are presented in fig. 5. The dynamical variables represent the change of average firing rate from the resting level of activity. The firing rate of the afferents has a tonic component proportional to the velocity and a phasic component proportional to the acceleration of movement. Physiologically, the afferents have a wide range of phasic and tonic amplitudes. This is reflected by a wide selection of parameters in the numerators in the boxes of fig. 3 and fig. 4. The Laplace transform of the integration operator in equation (3) of the eye position coordinates is 1/s. Following Robinson (1981), we modeled the neural integrator with a gain and a time constant of 20 seconds. We therefore replaced the pure integrator 1/s with 20000/(20000s + 1) in the calculations of eye position. The term 1/|g| in fig. 3 is calculated by using equations (4) and (5), and by using the integrator 20000/(20000s + 1) on the eye velocity motor command to find the angles θ_L and θ_R. \n\nThe dynamical model is based on the assumption that the cerebellum is required for context modulation, and that because of its architecture, the cerebellum is more likely to implement complex functions of multiple signals than other relevant nuclei. The major contributions of vergence and eye position modulation on the VOR are therefore mediated by the cerebellum. 
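Because the neural integrator is modeled with a 20-second time constant, it behaves almost exactly like a pure integrator over the ~100 ms responses considered in fig. 5. This sketch compares the two on a millisecond time scale, assuming the standard first-order leaky form with time constant T (the step size and input are illustrative assumptions):

```python
# Pure integrator 1/s versus a leaky integrator with time constant
# T = 20000 ms (20 s), i.e. dy/dt = u - y/T, simulated over 100 ms.
T = 20000.0       # time constant in ms
dt = 0.1          # Euler step in ms; illustrative
u = 1.0           # constant input (arbitrary units)

y_pure, y_leaky = 0.0, 0.0
for _ in range(int(100 / dt)):
    y_pure += dt * u
    y_leaky += dt * (u - y_leaky / T)

# Over 100 ms the leak loses only about 0.25% of the integrated signal.
print(y_pure, y_leaky)
```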
Smaller and more transient contributions from eye position are assumed to be mediated through the vestibular nucleus as shown in fig. 4. The motivations for combining eye position as in fig. 4 are, first, the evidence for eye response oscillations; second, the theoretical consideration that linear movement information (T_o) is useless without eye position information for a proper VOR. \n\nThe parameters in the dynamical model were adjusted by hand after observing the behavior of the different components of the model and noting how these combine to produce the oscillations observed \n\n[Figure 3 block diagram: semicircular canal and otolith velocity inputs pass through filter boxes, are combined with eye position, and project to the cerebellum and the vestibular nucleus] \n\nFigure 3: Contribution of the cerebellum to the dynamical model. Filtered velocity inputs from the canals and otoliths are combined with eye position according to equation (2). These calculations could be performed either outside the cerebellum in one or multiple brain stem nuclei (as shown) or possibly inside the cerebellum. The only output is to the vestibular nucleus. The Laplace notation is used in each box to represent a leaky integrator with a time constant, input derivative and input gain. The term Ω_c denotes the coordinates of the head rotation vector shown in fig. 1. The × indicates a multiplication. The term 1/|g| multiplies each input individually. The open arrows indicate inhibitory (negative) connections. \n\n[Figure 4 block diagram: semicircular canal and otolith inputs processed in the vestibular nucleus] \n\nFigure 4: Contribution of the vestibular nucleus to the dynamical model. Three pathways in the vestibular nucleus process the canal and otolith inputs to drive the eye. The first pathway is modulated by the output of the cerebellum through an FTN (Flocculus Target Neuron). 
The second and third pathways report transient information from the inputs, which is combined with eye position in a manner identical to fig. 3. The location of these calculations is hypothetical. \n\nin the data. Even though the number of parameters in the model is not small, it was not possible to fit any single response in fig. 5 without affecting most of the other eye responses. This puts severe limits on the set of parameters allowed in the model. \n\nThe dynamical model suggests that the oscillations present in the data reflect: 1) important acceleration components in the neural signals, both rotational and linear, 2) different time delays between the canal and otolith signal processing, and 3) antagonistic or synergistic action of the canal and otolith signals with different axes of rotation, as described by the two terms in the bracket of equation (2). \n\n4  Discussion \n\nBy fitting the dynamical model to the data, we tested the hypothesis that the VOR has a response close to ideal taking into account the time constraints imposed by the sensory inputs and the neural networks performing the computations. The vector computations that we used in the model may not \n\nDynamical Model Responses vs Experimental Data \n\n[Figure 5 panels: eye velocity versus time (ms) for varied target distance (left) and varied location of the axis of rotation (right)] \n\nFigure 5: Comparison between the dynamical model and monkey data. The dotted lines show the effect of viewing distance and location of the axis of rotation on the VOR as recorded by Snyder & King (1992) from monkeys in the dark. The average eye velocity response (of left and right eye) to a sudden change in head velocity is shown for different target distances (left) and rotational axes (right). 
On the left, the location of the axis of rotation was in the midsagittal plane 12.5 cm behind the eyes (-12.5 cm), and the target distance was varied between 220 cm and 9 cm. On the right, the target distance was kept constant at 9 cm in front of the eye, and the location of the axis of rotation was varied from 14 cm behind to 4 cm in front of the eyes (-14 cm to 4 cm) in the midsagittal plane. The solid lines show the model responses. The model replicates many characteristics of the data. On the left the model captures the eye velocity fluctuations between 20-50 ms, followed by a decrease and an increase which are both modulated with target distance (50-80 ms). The later phase of the response (80-100 ms) is almost exact for 220 cm, and one peak is seen at the appropriate location for the other distances. On the right the closest fits were obtained for the 4 cm and 0 cm locations. The mean values are in good agreement and the waveforms are close, but could be shifted in time for the other locations of the axis of rotation. Finally, the latest peak (~100 ms) in the data appears in the model for the -14 cm and 9 cm locations. \n\nbe the representation used in the oculomotor system. Mathematically, the vector representation is only one way to describe the computations involved. Other representations exist, such as the quaternion representation, which has been studied in the context of the saccadic system (Tweed and Vilis, 1987; see also Handzel and Flash, 1996 for a very general representation). Detailed comparisons between the model and recordings from neurons will be required to settle this issue. \n\nDirect comparison between Purkinje cell recordings (L.H. Snyder & W.M. King, unpublished data) and predictions of the model could be used to determine more precisely the different inputs to some Purkinje cells. 
The model can therefore be an important tool to gain insights difficult to obtain directly with experiments. \n\nThe question of how the central nervous system learns the transformations that we described still remains. The cerebellum may be one site of learning for these transformations, and its output may modulate the VOR in real time depending on the context. This view is compatible with the results of Angelaki and Hess (1995), which indicate that the cerebellum is required to correctly perform an otolith transformation. It is also consistent with adaptation results in the VOR. To test this hypothesis, we have been working on a model of the cerebellum which learns to anticipate sensory inputs and feedback, and uses these signals to modulate the VOR. The learning in the cerebellum and vestibular nuclei is mediated by the climbing fibers, which report a reinforcement signal of the prediction error (Coenen and Sejnowski, in preparation). \n\n5  Conclusion \n\nMost research on the VOR has assumed forward gaze focused at infinity. The kinematics of off-center gaze and fixation at finite distance necessitates nonlinear corrections that require the integration of a variety of sensory inputs. The dynamical model studied here is a working hypothesis for how these corrections could be computed and is generally consistent with what is known about the cerebellum and brain stem nuclei. We are, however, far from knowing the mechanisms underlying these computations, or how they are learned through experience. \n\n6  Acknowledgments \n\nThe first author was supported by a McDonnell-Pew Graduate Fellowship during this research. We would like to thank Paul Viola for helpful discussions. \n\nReferences \n\nAngelaki, D. E. and Hess, B. J. (1995). 
Inertial representation of angular motion in the vestibular system of rhesus monkeys. II. Otolith-controlled transformation that depends on an intact cerebellar nodulus. Journal of Neurophysiology, 73(5):1729-1751. \n\nCannon, S. C. and Robinson, D. A. (1987). Loss of the neural integrator of the oculomotor system from brain stem lesions in monkey. Journal of Neurophysiology, 57(5):1383-1409. \n\nCrawford, J. D., Cadera, W., and Vilis, T. (1991). Generation of torsional and vertical eye position signals by the interstitial nucleus of Cajal. Science, 252:1551-1553. \n\nGamlin, P. D. R. and Clarke, R. J. (1995). Single-unit activity in the primate nucleus reticularis tegmenti pontis related to vergence and ocular accommodation. Journal of Neurophysiology, 73(5):2115-2119. \n\nGrossman, G. E., Leigh, R. J., Bruce, E. N., Huebner, W. P., and Lanska, D. J. (1989). Performance of the human vestibuloocular reflex during locomotion. Journal of Neurophysiology, 62(1):264-272. \n\nHandzel, A. A. and Flash, T. (1996). The geometry of eye rotations and Listing's law. In Touretzky, D., Mozer, M., and Hasselmo, M., editors, Advances in Neural Information Processing Systems 8, Cambridge, MA. MIT Press. \n\nJudge, S. J. and Cumming, B. G. (1986). Neurons in the monkey midbrain with activity related to vergence eye movement and accommodation. Journal of Neurophysiology, 55:915-930. \n\nMays, L. E. (1984). Neural control of vergence eye movements: Convergence and divergence neurons in midbrain. Journal of Neurophysiology, 51:1091-1108. \n\nRobinson, D. A. (1981). The use of control systems analysis in the neurophysiology of eye movements. Ann. Rev. Neurosci., 4:463-503. \n\nShelhamer, M., Robinson, D. A., and Tan, H. S. (1992). Context-specific adaptation of the gain of the vestibulo-ocular reflex in humans. Journal of Vestibular Research, 2:89-96. \n\nSnyder, L. H. and King, W. M. (1992). 
Effect of viewing distance and location of the axis of head rotation on the monkey's vestibuloocular reflex. I. Eye movement response. Journal of Neurophysiology, 67(4):861-874. \n\nTweed, D. and Vilis, T. (1987). Implications of rotational kinematics for the oculomotor system in three dimensions. Journal of Neurophysiology, 58(4):832-849. \n\nViire, E., Tweed, D., Milner, K., and Vilis, T. (1986). A reexamination of the gain of the vestibuloocular reflex. Journal of Neurophysiology, 56(2):439-450. \n", "award": [], "sourceid": 1035, "authors": [{"given_name": "Olivier", "family_name": "Coenen", "institution": null}, {"given_name": "Terrence", "family_name": "Sejnowski", "institution": null}]}