{"title": "Linear Combinations of Optic Flow Vectors for Estimating Self-Motion - a Real-World Test of a Neural Model", "book": "Advances in Neural Information Processing Systems", "page_first": 1343, "page_last": 1350, "abstract": null, "full_text": "Linear Combinations of Optic Flow Vectors for\nEstimating Self-Motion \u2013a Real-World Test of a\n\nNeural Model\n\nMatthias O. Franz\n\nMPI f\u00a8ur biologische Kybernetik\n\nSpemannstr. 38\n\nD-72076 T\u00a8ubingen, Germany\nmof@tuebingen.mpg.de\n\nJavaan S. Chahl\n\nCenter of Visual Sciences, RSBS\nAustralian National University\n\nCanberra, ACT, Australia\n\njavaan@zappa.anu.edu.au\n\nAbstract\n\nThe tangential neurons in the \ufb02y brain are sensitive to the typical optic\n\ufb02ow patterns generated during self-motion. In this study, we examine\nwhether a simpli\ufb01ed linear model of these neurons can be used to esti-\nmate self-motion from the optic \ufb02ow. We present a theory for the con-\nstruction of an estimator consisting of a linear combination of optic \ufb02ow\nvectors that incorporates prior knowledge both about the distance distri-\nbution of the environment, and about the noise and self-motion statistics\nof the sensor. The estimator is tested on a gantry carrying an omnidirec-\ntional vision sensor. The experiments show that the proposed approach\nleads to accurate and robust estimates of rotation rates, whereas transla-\ntion estimates turn out to be less reliable.\n\n1\n\nIntroduction\n\nThe tangential neurons in the \ufb02y brain are known to respond in a directionally selective\nmanner to wide-\ufb01eld motion stimuli. A detailed mapping of their local motion sensitivities\nand preferred motion directions shows a striking similarity to certain self-motion-induced\n\ufb02ow \ufb01elds (an example is shown in Fig. 1). 
This suggests a possible involvement of these neurons in the extraction of self-motion parameters from the optic flow, which might be useful, for instance, for stabilizing the fly's head during flight manoeuvres.

A recent study [2] has shown that a simplified computational model of the tangential neurons as a weighted sum of flow measurements was able to reproduce the observed response fields. The weights were chosen according to an optimality principle which minimizes the output variance of the model caused by noise and distance variability between different scenes. The question of how the output of such processing units could be used for self-motion estimation was left open, however.

Figure 1: Mercator map of the response field of the neuron VS7. The orientation of each arrow gives the local preferred direction (LPD), and its length denotes the relative local motion sensitivity (LMS). VS7 responds maximally to rotation around an axis at an azimuth of about 30° and an elevation of about −15° (after [1]).

In this paper, we want to fill a part of this gap by presenting a classical linear estimation approach that extends a special case of the previous model to the complete self-motion problem. We again use linear combinations of local flow measurements but, instead of prescribing a fixed motion axis and minimizing the output variance, we require that the quadratic error in the estimated self-motion parameters be as small as possible. From this optimization principle, we derive weight sets that lead to motion sensitivities similar to those observed in tangential neurons. 
In contrast to the previous model, this approach also yields the preferred motion directions and the motion axes to which the neural models are tuned. We subject the obtained linear estimator to a rigorous real-world test on a gantry carrying an omnidirectional vision sensor.

2 Modeling fly tangential neurons as optimal linear estimators for self-motion

2.1 Sensor and neuron model

In order to simplify the mathematical treatment, we assume that the N elementary motion detectors (EMDs) of our model eye are arranged on the unit sphere. The viewing direction of a particular EMD with index i is denoted by the radial unit vector d_i. At each viewing direction, we define a local two-dimensional coordinate system on the sphere consisting of two orthogonal tangential unit vectors u_i and v_i (Fig. 2a). We assume that we measure the local flow component along both unit vectors subject to additive noise. Formally, this means that we obtain at each viewing direction two measurements x_i and y_i along u_i and v_i, respectively, given by

x_i = p_i · u_i + n_{x,i}   and   y_i = p_i · v_i + n_{y,i},   (1)

where n_{x,i} and n_{y,i} denote additive noise components and p_i the local optic flow vector. When the spherical sensor translates with T while rotating with R about an axis through the origin, the self-motion-induced image flow p_i at d_i is [3]

p_i = −μ_i (T − (T · d_i) d_i) − R × d_i.   (2)

μ_i is the inverse distance between the origin and the object seen in direction d_i, the so-called “nearness”. The entire collection of flow measurements x_i and y_i comprises the

Figure 2: a: Sensor model: At each viewing direction d_i, there are two measurements x_i and y_i of the optic flow p_i along two directions u_i and v_i on the unit sphere. 
b: Simplified model of a tangential neuron: The optic flow and the local noise signal are projected onto a unit vector field. The weighted projections are linearly integrated to give the estimator output.

input to the simplified neural model of a tangential neuron which consists of a weighted sum of all local measurements (Fig. 2b)

θ̂ = Σ_{i=1}^{N} w_{x,i} x_i + Σ_{i=1}^{N} w_{y,i} y_i   (3)

with local weights w_{x,i} and w_{y,i}. In this model, the local motion sensitivity (LMS) is defined as w_i = ‖(w_{x,i}, w_{y,i})‖; the local preferred motion direction (LPD) is parallel to the vector (1/w_i)(w_{x,i}, w_{y,i}). The resulting LMSs and LPDs can be compared to measurements on real tangential neurons.

As our basic hypothesis, we assume that the output of such model neurons is used to estimate the self-motion of the sensor. Since the output is a scalar, we need in the simplest case an ensemble of six neurons to encode all six rotational and translational degrees of freedom. The local weights of each neuron are chosen to yield an optimal linear estimator for the respective self-motion component.

2.2 Prior knowledge

An estimator for self-motion consisting of a linear combination of flow measurements necessarily has to neglect the dependence of the optic flow on the object distances. As a consequence, the estimator output will be different from scene to scene, depending on the current distance and noise characteristics. The best the estimator can do is to add up as many flow measurements as possible, hoping that the individual distance deviations of the current scene from the average will cancel each other. 
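The sensor and neuron model of Eqs. (1)-(3) can be sketched numerically. The following minimal NumPy illustration uses made-up viewing directions, distances, noise level, and weights; none of these values come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor: N viewing directions d_i on the unit sphere.
N = 100
d = rng.normal(size=(N, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)

# Local tangential basis (u_i, v_i) at each viewing direction.
ref = np.where(np.abs(d[:, [2]]) < 0.9, [[0.0, 0.0, 1.0]], [[1.0, 0.0, 0.0]])
u = np.cross(ref, d)
u /= np.linalg.norm(u, axis=1, keepdims=True)
v = np.cross(d, u)

def flow(T, R, mu):
    """Self-motion flow field, Eq. (2): p_i = -mu_i (T - (T.d_i) d_i) - R x d_i."""
    return -mu[:, None] * (T - (d @ T)[:, None] * d) - np.cross(R, d)

# Noisy EMD measurements along u_i and v_i, Eq. (1).
T = np.array([0.0, 0.0, 0.3])        # forward translation, m/s (invented)
R = np.array([0.0, 0.0, 0.1])        # yaw rotation, rad/s (invented)
mu = 1.0 / rng.uniform(1.0, 4.0, N)  # nearness = inverse object distance
p = flow(T, R, mu)
x = np.einsum('ij,ij->i', p, u) + 0.006 * rng.normal(size=N)
y = np.einsum('ij,ij->i', p, v) + 0.006 * rng.normal(size=N)

# One model neuron, Eq. (3): a weighted sum of all local measurements.
w_x = rng.normal(size=N)             # placeholder weights (optimized in Sect. 2.3)
w_y = rng.normal(size=N)
output = w_x @ x + w_y @ y
lms = np.hypot(w_x, w_y)             # local motion sensitivities
```

Since both the translational and the rotational term of Eq. (2) are tangential to the sphere, the generated flow vectors satisfy p_i · d_i = 0, which is a convenient sanity check for such an implementation.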
Clearly, viewing directions with low distance variability and small noise content should receive a higher weight in this process. In this way, prior knowledge about the distance and noise statistics of the sensor and its environment can improve the reliability of the estimate.

If the current nearness at viewing direction d_i differs from the average nearness μ̄_i over all scenes by Δμ_i, the measurement x_i can be written as (see Eqs. (1) and (2))

x_i = −(μ̄_i u_i^T, (u_i × d_i)^T) (T; R) + n_{x,i} − Δμ_i u_i^T T,   (4)

where (T; R) denotes the stacked vector of translation and rotation, and where the last two terms vary from scene to scene, even when the sensor undergoes exactly the same self-motion.

To simplify the notation, we stack all 2N measurements over the entire EMD array in the vector x = (x_1, y_1, x_2, y_2, ..., x_N, y_N)^T. Similarly, the self-motion components along the x-, y- and z-directions of the global coordinate system are combined in the vector θ = (T_x, T_y, T_z, R_x, R_y, R_z)^T, the scene-dependent terms of Eq. (4) in the 2N-vector n = (n_{x,1} − Δμ_1 u_1^T T, n_{y,1} − Δμ_1 v_1^T T, ...)^T, and the scene-independent terms in the 2N×6 matrix F = ((−μ̄_1 u_1^T, −(u_1 × d_1)^T); (−μ̄_1 v_1^T, −(v_1 × d_1)^T); ...). The entire ensemble of measurements over the sensor can thus be written as

x = F θ + n.   (5)

Assuming that T, n_{x,i}, n_{y,i} and μ_i are uncorrelated, the covariance matrix C of the scene-dependent measurement component n is given by

C_ij = C_{n,ij} + C_{μ,ij} u_i^T C_T u_j,   (6)

with C_n being the covariance of the noise, C_μ of the nearness μ and C_T of the translation T. These three covariance matrices, together with the average nearness μ̄_i, constitute the prior knowledge required for deriving the optimal estimator.

2.3 Optimized neural model

Using the notation of Eq. 
(5), we write the linear estimator as

θ̂ = W x.   (7)

W denotes a 6×2N weight matrix in which each of the six rows corresponds to one model neuron (see Eq. (3)) tuned to a different component of θ. The optimal weight matrix is chosen to minimize the mean square error e of the estimator given by

e = E(‖θ − θ̂‖²) = tr[W C W^T],   (8)

where E denotes the expectation. We additionally impose the constraint that the estimator should be unbiased for n = 0, i.e., θ̂ = θ. From Eqs. (5) and (7) we obtain the constraint equation

W F = 1_{6×6}.   (9)

The solution minimizing the associated Euler-Lagrange functional (Λ is a 6×6 matrix of Lagrange multipliers)

J = tr[W C W^T] + tr[Λ^T (1_{6×6} − W F)]   (10)

can be found analytically and is given by

W = (1/2) Λ F^T C^{−1}   (11)

with Λ = 2 (F^T C^{−1} F)^{−1}. When computed for the typical inter-scene covariances of a flying animal, the resulting weight sets are able to reproduce the characteristics of the LMS and LPD distribution of the tangential neurons [2]. Having shown the good correspondence between model neurons and measurement, the question remains whether the output of such an ensemble of neurons can be used for some real-world task. 
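In code, the optimal weight matrix of Eq. (11) amounts to a generalized least-squares solve. A minimal sketch with stand-in F and C (our own toy values, not the robot statistics of Sect. 3):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the quantities of Sect. 2: 2N measurements, 6 motion parameters.
# A random F and a diagonal C are used purely for illustration.
N = 50
F = rng.normal(size=(2 * N, 6))            # flow matrix of Eq. (5)
C = np.diag(rng.uniform(0.5, 2.0, 2 * N))  # covariance of Eq. (6)

# Eq. (11): W = (1/2) Lambda F^T C^-1 with Lambda = 2 (F^T C^-1 F)^-1,
# i.e. W = (F^T C^-1 F)^-1 F^T C^-1 -- a generalized least-squares solve.
Ci = np.linalg.inv(C)
W = np.linalg.solve(F.T @ Ci @ F, F.T @ Ci)

# The constraint of Eq. (9) holds by construction: W F = identity,
# so a noise-free measurement x = F theta is decoded exactly.
theta = rng.normal(size=6)
theta_hat = W @ (F @ theta)
```

Each of the six rows of the resulting W plays the role of one model neuron's weight set; with the real μ̄_i, C_μ, C_n and C_T plugged into F and C, these rows would correspond to the response fields shown in Fig. 4.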
This is by no means evident given the fact that, in contrast to most approaches in computer vision, the distance distribution of the current scene is completely ignored by the linear estimator.

3 Experiments

3.1 Linear estimator for an office robot

As our test scenario, we consider the situation of a mobile robot in an office environment. This scenario allows for measuring the typical motion patterns and the associated distance statistics which otherwise would be difficult to obtain for a flying agent.

Figure 3: Distance statistics of an indoor robot (0° azimuth corresponds to the forward direction): a: Average distances from the origin in the visual field (N = 26). Darker areas represent larger distances. b: Distance standard deviation in the visual field (N = 26). Darker areas represent stronger deviations.

The distance statistics were recorded using a rotating laser scanner. 
The 26 measurement points were chosen along typical trajectories of a mobile robot while wandering around and avoiding obstacles in an office environment. The recorded distance statistics therefore reflect properties both of the environment and of the specific movement patterns of the robot. From these measurements, the average nearness μ̄_i and its covariance C_μ were computed (cf. Fig. 3; we used distance instead of nearness for easier interpretation).

The distance statistics show a pronounced anisotropy which can be attributed to three main causes: (1) Since the robot tries to turn away from the obstacles, the distance in front of and behind the robot tends to be larger than on its sides (Fig. 3a). (2) The camera on the robot usually moves at a fixed height above ground on a flat surface. As a consequence, distance variation is particularly small at very low elevations (Fig. 3b). (3) The office environment also contains corridors. When the robot follows the corridor while avoiding obstacles, distance variations in the frontal region of the visual field are very large (Fig. 3b).

The estimation of the translation covariance C_T is straightforward since our robot can only translate in the forward direction, i.e. along the z-axis. C_T is therefore 0 everywhere except for the lower right diagonal entry, which is the square of the average forward speed of the robot (here: 0.3 m/s). The EMD noise was assumed to be zero-mean, uncorrelated and uniform over the image, which results in a diagonal C_n with identical entries. The noise standard

Figure 4: Model neurons computed as part of the linear estimator. 
Notation is identical to Fig. 1. The depicted region of the visual field extends from −15° to 180° azimuth and from −75° to 75° elevation. The model neurons are tuned to a. forward translation, and b. to rotations about the vertical axis.

deviation of 0.34 deg./s was determined by presenting a series of natural images moving at 1.1 deg./s to the flow algorithm used in the implementation of the estimator (see Sect. 3.2). μ̄, C_μ, C_T and C_n constitute the prior knowledge necessary for computing the estimator (Eqs. (6) and (11)).

Examples of the optimal weight sets for the model neurons (each corresponding to a row of W) are shown in Fig. 4. The resulting model neurons show very similar characteristics to those observed in real tangential neurons, however with specific adaptations to the indoor robot scenario. All model neurons have in common that image regions near the rotation or translation axis receive less weight. In these regions, the self-motion components to be estimated generate only small flow vectors which are easily corrupted by noise. Equation (11) predicts that the estimator will preferentially sample image regions with smaller distance variations. In our measurements, this is mainly the case on the ground around the robot (Fig. 3). The rotation-selective model neurons weight image regions with larger distances more highly, since distance variations at large distances have a smaller effect. In our example, distances are largest in front of and behind the robot, so that the rotation-selective neurons assign the highest weights to these regions (Fig. 3b).

3.2 Gantry experiments

The self-motion estimates from the model neuron ensemble were tested on a gantry with three translational and one rotational (yaw) degree of freedom. 
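The accuracy measures reported below, a rotation-rate error and an axis-direction error, can be computed with a small helper like the following (our own sketch; the function name and the example values are invented, not taken from the experiments):

```python
import numpy as np

def rate_and_axis_error(R_est, R_true):
    """Split an estimated rotation vector into a rate error and an axis
    misalignment angle (both in degrees). Illustrative helper of our own,
    not the authors' evaluation code."""
    rate_err = np.degrees(abs(np.linalg.norm(R_est) - np.linalg.norm(R_true)))
    cos_ang = R_est @ R_true / (np.linalg.norm(R_est) * np.linalg.norm(R_true))
    axis_err = np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))
    return rate_err, axis_err

# Example: a yaw rotation of 12 deg/s, estimated with a slight rate and tilt error.
R_true = np.array([0.0, 0.0, np.radians(12.0)])
R_est = np.array([0.0, np.radians(0.4), np.radians(11.5)])
rate_err, axis_err = rate_and_axis_error(R_est, R_true)
```

The same decomposition into a speed error and a direction error applies to the translation estimates.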
Since the gantry had a position accuracy below 1 mm, the programmed position values were taken as ground truth for evaluating the estimator's accuracy.

As vision sensor, we used a camera mounted above a mirror with a circularly symmetric hyperbolic profile. This setup allowed for a 360° horizontal field of view extending from 90° below to 45° above the horizon. Such a large field of view considerably improves the estimator's performance since the individual distance deviations in the scene are more likely to be averaged out. More details about the omnidirectional camera can be found in [4]. In each experiment, the camera was moved to 10 different start positions in the lab with largely varying distance distributions. After recording an image of the scene at the start position, the gantry translated and rotated at various prescribed speeds and directions and took a second image. After the recorded image pairs (10 for each type of movement) were unwarped, we computed the optic flow input for the model neurons using a standard gradient-based scheme [5].

Figure 5: Gantry experiments: Results are given in arbitrary units; true rotation values are denoted by a dashed line, translation by a dash-dot line. 
Grey bars denote translation estimates, white bars rotation estimates. a: Estimated vs. real self-motion; b: Estimates of the same self-motion at different locations; c: Estimates for constant rotation and varying translation; d: Estimates for constant translation and varying rotation.

The average error of the rotation rate estimates over all trials (N = 450) was 0.7°/s (5.7% rel. error, Fig. 5a); the error in the estimated translation speeds (N = 420) was 8.5 mm/s (7.5% rel. error). The estimated rotation axis had an average error of magnitude 1.7°, the estimated translation direction 4.5°. The larger error of the translation estimates is mainly caused by the direct dependence of the translational flow on distance (see Eq. (2)), whereas the rotation estimates are only indirectly affected by distance errors via the current translational flow component, which is largely filtered out by the LPD arrangement. The larger sensitivity of the translation estimates can be seen by moving the sensor at the same translation and rotation speeds in various locations. The rotation estimates remain consistent over all locations, whereas the translation estimates show a higher variance and also a location-dependent bias, e.g., very close to laboratory walls (Fig. 5b). A second problem for translation estimation comes from the different properties of rotational and translational flow fields: Due to its distance dependence, the translational flow field shows a much wider range of values than a rotational flow field. The smaller translational flow vectors are often swamped by simultaneous rotation or noise, and the larger ones tend to be in the upper saturation range of the optic flow algorithm used. This can be demonstrated by simultaneously translating and rotating the sensor. Again, rotation estimates remain consistent while translation estimates are strongly affected by rotation (Fig. 
5c and d).

4 Conclusion

Our experiments show that it is indeed possible to obtain useful self-motion estimates from an ensemble of linear model neurons. Although a linear approach necessarily has to ignore the distances of the currently perceived scene, an appropriate choice of local weights and a large field of view are capable of reducing the influence of noise and the particular scene distances on the estimates. In particular, rotation estimates were highly accurate (in a range comparable to gyroscopic estimates) and consistent across different scenes and different simultaneous translations. Translation estimates, however, turned out to be less accurate and less robust against changing scenes and simultaneous rotation.

The components of the estimator are simplified model neurons which have been shown to reproduce the essential receptive field properties of the fly's tangential neurons [2]. Our study suggests that the output of such neurons could be directly used for self-motion estimation by simply combining them linearly at a later integration stage. As our experiments have shown, the achievable accuracy would probably be more than enough for head stabilization under closed-loop conditions.

Finally, we have to point out a basic limitation of the proposed theory: It assumes linear EMDs as input to the neurons (see Eq. (1)). The output of fly EMDs, however, is only linear for very small image motions. It quickly saturates at a plateau value at higher image velocities. In this range, the tangential neuron can only indicate the presence and the sign of a particular self-motion component, not the current rotation or translation velocity. A linear combination of output signals, as in our model, is then no longer feasible but would require some form of population coding. 
In addition, a detailed comparison between the linear model and real neurons shows characteristic differences indicating that tangential neurons usually operate in the plateau range rather than in the linear range of the EMDs [2]. As a consequence, our study can only give a hint on what might happen at small image velocities. The case of higher image velocities has to await further research.

Acknowledgments

The gantry experiments were done at the Center of Visual Sciences in Canberra. The authors wish to thank J. Hill, M. Hofmann and M. V. Srinivasan for their help. Financial support was provided by the Human Frontier Science Program and the Max-Planck-Gesellschaft.

References

[1] Krapp, H. G., Hengstenberg, B., & Hengstenberg, R. (1998). Dendritic structure and receptive field organization of optic flow processing interneurons in the fly. J. of Neurophysiology, 79, 1902-1917.

[2] Franz, M. O., & Krapp, H. G. (2000). Wide-field, motion-sensitive neurons and matched filters for optic flow fields. Biol. Cybern., 83, 185-197.

[3] Koenderink, J. J., & van Doorn, A. J. (1987). Facts on optic flow. Biol. Cybern., 56, 247-254.

[4] Chahl, J. S., & Srinivasan, M. V. (1997). Reflective surfaces for panoramic imaging. Applied Optics, 36(31), 8275-8285.

[5] Srinivasan, M. V. (1994). An image-interpolation technique for the computation of optic flow and egomotion. Biol. Cybern., 71, 401-415.
", "award": [], "sourceid": 2247, "authors": [{"given_name": "Matthias", "family_name": "Franz", "institution": null}, {"given_name": "Javaan", "family_name": "Chahl", "institution": null}]}