{"title": "Tracking Changing Stimuli in Continuous Attractor Neural Networks", "book": "Advances in Neural Information Processing Systems", "page_first": 481, "page_last": 488, "abstract": "Continuous attractor neural networks (CANNs) are emerging as promising models for describing the encoding of continuous stimuli in neural systems. Due to the translational invariance of their neuronal interactions, CANNs can hold a continuous family of neutrally stable states. In this study, we systematically explore how neutral stability of a CANN facilitates its tracking performance, a capacity believed to have wide applications in brain functions. We develop a perturbative approach that utilizes the dominant movement of the network stationary states in the state space. We quantify the distortions of the bump shape during tracking, and study their effects on the tracking performance. Results are obtained on the maximum speed for a moving stimulus to be trackable, and the reaction time to catch up an abrupt change in stimulus.", "full_text": "Tracking Changing Stimuli in Continuous Attractor\n\nNeural Networks\n\nDepartment of Physics, The Hong Kong University of Science and Technology,\n\nC. C. Alan Fung, K. Y. Michael Wong\n\nClear Water Bay, Hong Kong, China\n\nalanfung@ust.hk, phkywong@ust.hk\n\nSi Wu\n\nDepartment of Informatics, University of Sussex, Brighton, United Kingdom\n\nInstitute of Neuroscience, Shanghai Institutes for Biological Sciences,\n\nState Key Laboratory of Neurobiology, Chinese Academy of Sciences, Shanghai 200031, China.\n\nsiwu@ion.ac.cn\n\nAbstract\n\nContinuous attractor neural networks (CANNs) are emerging as promising mod-\nels for describing the encoding of continuous stimuli in neural systems. Due to\nthe translational invariance of their neuronal interactions, CANNs can hold a con-\ntinuous family of neutrally stable states. 
In this study, we systematically explore how the neutral stability of a CANN facilitates its tracking performance, a capacity believed to have wide applications in brain functions. We develop a perturbative approach that utilizes the dominant movement of the network stationary states in the state space. We quantify the distortions of the bump shape during tracking, and study their effects on the tracking performance. Results are obtained on the maximum speed for a moving stimulus to be trackable, and on the reaction time to catch up with an abrupt change in the stimulus.\n\n1 Introduction\n\nUnderstanding how the dynamics of a neural network is shaped by the network structure, and consequently facilitates the functions implemented by the neural system, is at the core of using mathematical models to elucidate brain functions [1]. The impact of the network structure on its dynamics is twofold: on one hand, it determines the stationary states of the network, which leads to associative memory; on the other hand, it carves the landscape of the state space of the network as a whole, which may contribute to other cognitive functions, such as movement control, spatial navigation, population decoding and object categorization.\n\nRecently, a type of attractor network, called the continuous attractor neural network (CANN), has received considerable attention (see, e.g., [2, 3, 4, 6, 7, 8, 9, 10, 11, 12, 13, 5]). These networks possess a translational invariance of the neuronal interactions. As a result, they can hold a family of stationary states which can be translated into each other without the need to overcome any barriers. Thus, in the continuum limit, they form a continuous manifold in which the system is neutrally stable, and the network state can translate easily when the external stimulus changes continuously. Beyond pure memory retrieval, this large-scale structure of the state space endows the neural system with a tracking capability. 
This is different from conventional models of associative memory, such as the Hopfield model [14], in which the basin of each attractor is well separated from the others.\n\nThe tracking dynamics of a CANN has been investigated by several authors in the literature (see, e.g., [3, 4, 5, 8, 11]). These studies have shown that a CANN has the capacity of tracking a moving stimulus continuously, and that this tracking property can account for many brain functions. Despite these successes, however, a detailed analysis of the tracking behaviors of a CANN is still lacking. Open issues include, for instance, 1) the conditions under which a CANN can successfully track a moving stimulus, 2) the distortion of the shape of the network state during the tracking, and 3) the effects of these distortions on the tracking speed. In this paper we report, as far as we know, the first systematic study of these issues. We hope this study will help to establish a complete picture of the potential applications of CANNs in neural systems.\n\nWe will use a simple, analytically solvable CANN model as the working example. We show clearly how the dynamics of a CANN is decomposed into different distortion modes, corresponding, respectively, to changes in the height, position, width and skewness of the network state. We then demonstrate which of them dominates the tracking behaviors of the network. In order to solve the dynamics, which is otherwise extremely complicated for a large recurrent network, we develop a time-dependent perturbation method to approximate the tracking performance of the network. The solution is expressed in a simple closed form, and we can approximate the network dynamics up to arbitrary accuracy depending on the order of perturbation used. We expect that our method will provide a useful tool for the theoretical studies of CANNs. 
Our work generates new predictions on the tracking behaviors of CANNs, namely, the maximum tracking speed for moving stimuli, and the reaction time for sudden changes in external stimuli, both of which are testable by experiments.\n\n2 The Intrinsic Dynamics of CANNs\n\nWe consider a one-dimensional continuous stimulus being encoded by an ensemble of neurons. The stimulus may represent, for example, the moving direction, the orientation, or a general continuous feature of an external object. Let U(x, t) be the synaptic input at time t to the neurons with preferred stimulus of real-valued x. We will consider stimuli and responses with correlation length a much less than the range of x, so that the range can be effectively taken to be (−∞, ∞). The firing rate r(x, t) of these neurons increases with the synaptic input, but saturates in the presence of a global activity-dependent inhibition. A solvable model that captures these features is given by\n\nr(x, t) = U(x, t)² / [1 + kρ ∫dx′ U(x′, t)²],   (1)\n\nwhere ρ is the neural density, and k is a small positive constant controlling the strength of the global inhibition. The dynamics of the synaptic input U(x, t) is determined by the external input Iext(x, t), the network input from other neurons, and its own relaxation. It is given by\n\nτ dU(x, t)/dt = Iext(x, t) + ρ ∫dx′ J(x, x′) r(x′, t) − U(x, t),   (2)\n\nwhere τ is the time constant, which is typically of the order of 1 ms, and J(x, x′) is the neural interaction from x′ to x. The key characteristic of CANNs is the translational invariance of their neural interactions. In our solvable model, we choose Gaussian interactions with a range a, namely,\n\nJ(x, x′) = [J/√(2πa²)] exp[−(x − x′)²/(2a²)].   (3)\n\nCANN models with other neural interactions and inhibition mechanisms have been studied [2, 3, 4, 7, 9]. 
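The model of Eqs. (1)-(3) is easy to simulate directly. The following sketch is not from the paper: the grid, time step and integration scheme are our own illustrative assumptions, with ρ = N/(2π) and J = √(2πa²) chosen to match the parameter sets quoted in the figure captions later in the text. It integrates the dynamics from a small initial bump, with no external input, and checks that the state relaxes to a Gaussian-shaped stationary profile.

```python
import numpy as np

# Minimal simulation sketch of Eqs. (1)-(3); parameter values are
# illustrative choices, not prescribed by the paper.
N, a, k, tau, dt = 256, 0.5, 0.5, 1.0, 0.05
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = x[1] - x[0]
rho = N / (2 * np.pi)                  # neural density
J0 = np.sqrt(2 * np.pi * a**2)         # interaction strength J
# Translationally invariant Gaussian interaction, Eq. (3)
J = (J0 / np.sqrt(2 * np.pi * a**2)
     * np.exp(-(x[:, None] - x[None, :])**2 / (2 * a**2)))

def firing_rate(U):
    """Divisively normalized firing rate, Eq. (1)."""
    U2 = U**2
    return U2 / (1.0 + k * rho * U2.sum() * dx)

U = np.exp(-x**2 / (4 * a**2))         # small initial bump, Iext = 0
for _ in range(2000):
    r = firing_rate(U)
    U += dt * (rho * (J @ r) * dx - U) / tau   # forward-Euler step of Eq. (2)

# The state relaxes to a Gaussian-shaped bump of width sqrt(2)*a.
width = np.sqrt((U * x**2).sum() / U.sum())
```

For these parameter values k is below the critical strength kc introduced in the next section, so a stable bump survives; its height should approach the value U0 given there (about 1.386 here).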
However, our model has the advantage of permitting a systematic perturbative improvement. Nevertheless, the final conclusions of our model are qualitatively applicable to general cases (to be further discussed at the end of the paper).\n\nWe first consider the intrinsic dynamics of the CANN model in the absence of external stimuli. For 0 < k < kc ≡ ρJ²/(8√(2π) a), the network holds a continuous family of stationary states, which are\n\nŨ(x|z) = U0 exp[−(x − z)²/(4a²)],   (4)\n\nwhere U0 = [1 + (1 − k/kc)^(1/2)] J/(4√π ak). These stationary states are translationally invariant among themselves and have the Gaussian bump shape peaked at arbitrary positions z.\n\nThe stability of the Gaussian bumps can be studied by considering the dynamics of fluctuations. Consider the network state U(x, t) = Ũ(x|z) + δU(x, t). Then we obtain\n\nτ (d/dt) δU(x, t) = ∫dx′ F(x, x′) δU(x′, t) − δU(x, t),   (5)\n\nFigure 1: The first four basis functions of the quantum harmonic oscillator, which represent four distortion modes of the network dynamics, namely, changes in the height, position, width and skewness of a bump state.\n\nwhere the interaction kernel is given by F(x, x′) = ρ ∫dx″ J(x, x″) ∂r(x″)/∂U(x′).\n\n2.1 The motion modes\n\nTo compute the eigenfunctions and eigenvalues of the kernel F(x, x′), we choose the wave functions of the quantum harmonic oscillator as the basis, namely,\n\nvn(x|z) = exp(−ξ²/2) Hn(ξ) / √((2π)^(1/2) a n! 2^n),   (6)\n\nwhere ξ ≡ (x − z)/(√2 a) and 
Hn(ξ) is the nth order Hermite polynomial. Indeed, the ground state of the quantum harmonic oscillator corresponds to the Gaussian bump, and the first, second, and third excited states correspond to fluctuations in the peak position, width, and skewness of the bump, respectively (see Fig. 1). The eigenvalues of the kernel F are calculated to be\n\nλ0 = 1 − (1 − k/kc)^(1/2);   λn = 1/2^(n−1), for n ≥ 1.   (7)\n\nThe eigenfunctions of F can also be calculated analytically, and turn out to be either the basis functions vn(x|z) or linear combinations of them. Here we only list the first four of them, which are u0(x|z) = v0(x|z), u1(x|z) = v1(x|z), u2(x|z) = [1/(√2 D0)] v0(x|z) + [(1 − 2√(1 − k/kc))/D0] v2(x|z), with D0 = [(1 − 2√(1 − k/kc))² + 1/2]^(1/2), and u3(x|z) = √(1/7) v1(x|z) + √(6/7) v3(x|z).\n\nThe eigenfunctions of F correspond to the various distortion modes of the bump. Since λ1 = 1 and all other eigenvalues are less than 1, the stationary state is neutrally stable in one component, and stable in all other components. The first two eigenfunctions are particularly important. (1) The eigenfunction for the eigenvalue λ0 is u0(x|z), and represents a distortion of the amplitude of the bump. As we shall see, amplitude changes of the bump affect its tracking performance. (2) Central to the tracking capability of CANNs, the eigenfunction for the eigenvalue 1 is u1(x|z), and it is neutrally stable. We note that u1(x|z) ∝ ∂v0(x|z)/∂z, corresponding to the shift of the bump position among the stationary states. This neutral stability is the consequence of the translational invariance of the network. It implies that when there are external inputs, however small, the bump will move continuously. This is a unique property associated with the special structure of a CANN, not shared by other attractor models. 
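The spectrum of Eq. (7) can be checked numerically. The sketch below is our own illustration, with an assumed discretization and parameter values: it builds the kernel F(x, x′) at a stationary bump, using the functional derivative of Eq. (1), and compares the leading eigenvalues with 1, 1/2, 1/4, ... and with λ0.

```python
import numpy as np

# Numerical check of the mode spectrum of Sec. 2.1 (Eq. 7).
# Grid size and parameters are illustrative assumptions.
N, a, k = 256, 0.5, 0.5
x = np.linspace(-np.pi, np.pi, N, endpoint=False)
dx = x[1] - x[0]
rho = N / (2 * np.pi)
J0 = np.sqrt(2 * np.pi * a**2)
kc = rho * J0**2 / (8 * np.sqrt(2 * np.pi) * a)
lam0 = 1 - np.sqrt(1 - k / kc)
U0 = (1 + np.sqrt(1 - k / kc)) * J0 / (4 * np.sqrt(np.pi) * a * k)
U = U0 * np.exp(-x**2 / (4 * a**2))    # stationary bump, Eq. (4)
J = (J0 / np.sqrt(2 * np.pi * a**2)
     * np.exp(-(x[:, None] - x[None, :])**2 / (2 * a**2)))

B = 1 + k * rho * np.sum(U**2) * dx    # divisive factor of Eq. (1)
# F(x,x') = rho * int dx'' J(x,x'') dr(x'')/dU(x'); the delta-function
# part of dr/dU gives the first term, the divisive part the rank-1 term.
F = rho * dx * ((2.0 / B) * J * U[None, :]
                - (2 * k * rho / B**2) * ((J @ (U**2)) * dx)[:, None] * U[None, :])
ev = np.sort(np.linalg.eigvals(F).real)[::-1]
# Expect ev[0] ~ 1 (position shift), ev[1] ~ 1/2, ev[2] ~ 1/4, and the
# amplitude eigenvalue lam0 further down the list.
```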
Other eigenfunctions correspond to distortions of the shape of the bump; for example, the eigenfunction u3(x|z) corresponds to a skewed distortion of the bump.\n\n2.2 The energy landscape\n\nIt is instructive to consider the energy landscape in the state space of a CANN. Since F(x, x′) is not symmetric, a Lyapunov function cannot be derived for Eq. (5). Nevertheless, for each peak position z, one can define an effective energy function E|z = Σn (1 − λn) bn|z²/2, where bn|z is the overlap between U(x) − Ũ(x|z) and the nth eigenfunction of F centered at z. Then the dynamics in Eq. (5) can be locally described by the gradient descent of E|z in the space of bn|z. Since the set of points bn|z = 0 for n ≠ 1 traces out a line with E|z = 0 in the state space when z varies, one can envisage a canyon surrounding the line and facilitating the local gradient descent dynamics, as shown in Fig. 2. A small force along the tangent of the canyon can move the network state easily. This illustrates how the landscape of the state space of a CANN is shaped by the network structure, leading to the neutral stability of the system, and how this neutral stability shapes the network dynamics.\n\nFigure 2: The canyon formed by the stationary states of a CANN projected onto the subspace formed by b1|0, the position shift, and b0|0, the height distortion. Motion along the canyon corresponds to the displacement of the bump (inset).\n\n3 The Tracking Behaviors\n\nWe now consider the network dynamics in the presence of a weak external stimulus. Suppose the neural response at time t is peaked at z(t). Since the dynamics is primarily dominated by the translational motion of the bump, with secondary distortions in shape, we may develop a time-dependent perturbation analysis using {vn(x|z(t))} as the basis, and consider perturbations in increasing orders of n. 
This is done by considering solutions of the form\n\nU(x, t) = Ũ(x|z(t)) + Σ_{n=0}^∞ an(t) vn(x|z(t)).   (8)\n\nFurthermore, since the Gaussian bump is the steady-state solution of the dynamical equation in the absence of external stimuli, the neuronal interaction term in Eq. (2) can be linearized for weak stimuli. Making use of the orthonormality and completeness of {vn(x|z(t))}, we obtain from Eq. (2) expressions for dan/dt at each order n of the perturbation, which are\n\n(d/dt + (1 − λn)/τ) an = In/τ − [U0 √((2π)^(1/2) a) δ_{n1} + √n a_{n−1} − √(n+1) a_{n+1}] (1/(2a)) (dz/dt) + (1/τ) Σ_{r=1}^∞ [(−1)^r/(2^(n+3r−1) r!)] √((n+2r)!/n!) a_{n+2r},   (9)\n\nwhere In(t) is the projection of the external input Iext(x, t) on the nth eigenfunction. Determining z(t) by the center of mass of U(x, t), we obtain the self-consistent condition\n\ndz/dt = (2a/τ) [I1 + Σ_{n=3,odd}^∞ √(n!!/(n − 1)!!) In + a1] / [U0 √((2π)^(1/2) a) + Σ_{n=0,even}^∞ √((n − 1)!!/n!!) an].   (10)\n\nEqs. (9) and (10) are the master equations of the perturbation method. We can approximate the network dynamics up to arbitrary accuracy depending on the choice of the order of perturbation. In practice, low-order perturbations already yield very accurate results.\n\n3.1 Tracking a moving stimulus\n\nConsider an external stimulus consisting of a Gaussian bump, namely, Iext(x, t) = αU0 exp[−(x − z0)²/(4a²)]. Perturbation up to the order n = 1 yields a1(t) = 0, [d/dt + (1 − λ0)/τ] a0 = αU0 √((2π)^(1/2) a) exp[−(z0 − z)²/(8a²)]/τ, and\n\ndz/dt = (α/τ)(z0 − z) exp[−(z0 − z)²/(8a²)] R(t)^(−1),   (11)\n\nwhere R(t) = 1 + α ∫_{−∞}^t (dt′/τ) exp[−(1 − λ0)(t − t′)/τ − (z0 − z(t′))²/(8a²)] represents the ratio of the bump height to that in the absence of the external stimulus (α = 0). Hence, the dynamics is driven by a pull of the bump position towards the stimulus position z0. The factor R(t) > 1 implies that the increase in the amplitude of the bump slows down its response.\n\nFigure 3: (a) The time dependence of the separation s starting from different initial values. Symbols: simulations with N = 200 and v = 0.025. Lines: n = 5 perturbation. Dashed lines: s1 (bottom) and s2 (top). (b) The dependence of the terminal separation s on the stimulus speed v. Symbols: simulations with N = 200. Dashed line: n = 1 perturbation. Parameters: α = 0.05, a = 0.5, τ = 1, k = 0.5, ρ = N/(2π), J = √(2πa²).\n\nThe tracking performance of a CANN is a key property that is believed to have wide applications in neural systems. Suppose the stimulus is moving at a constant velocity v. The dynamical equation becomes identical to Eq. (11), with z0 = vt. Denoting the lag of the bump behind the stimulus by s = z0 − z, we have, after the transients,\n\nds/dt = v − g(s);   g(s) ≡ [αs e^(−s²/(8a²))/τ] [1 + α e^(−s²/(8a²))/(1 − λ0)]^(−1).   (12)\n\nThe value of s is determined by two competing factors: the first term represents the movement of the stimulus, which tends to enlarge the separation, and the second term represents the collective effects of the neuronal recurrent interactions, which tend to reduce the lag. Tracking is maintained when these two factors match each other, i.e., v = g(s); otherwise, s diverges. The function g(s) is concave, and has the maximum value gmax = 2αa/(τ√e) at s = 2a. This means that if v > gmax, the network is unable to track the stimulus. 
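The competition in Eq. (12) is easy to illustrate numerically. In the weak-stimulus limit the height factor is close to 1, so g(s) ≈ αs exp(−s²/8a²)/τ; the sketch below (parameter values follow Fig. 3, everything else is our own illustrative assumption) locates the maximum of this leading-order g at s = 2a and finds the two fixed points s1 < 2a < s2 for a sub-maximal stimulus speed v.

```python
import numpy as np

# Sketch of the lag dynamics ds/dt = v - g(s) of Eq. (12), to leading
# order in alpha (height factor R ~ 1). Parameters as in Fig. 3.
alpha, a, tau = 0.05, 0.5, 1.0

def g0(s):
    # leading-order restoring speed of the bump at lag s
    return alpha * s * np.exp(-s**2 / (8 * a**2)) / tau

s = np.linspace(0.0, 6 * a, 100001)
s_star = s[np.argmax(g0(s))]               # maximum sits at s = 2a
g_max = 2 * alpha * a / (tau * np.sqrt(np.e))

# For v < g_max, v = g0(s) has two roots: the tracking state s1 (< 2a,
# stable) and the unstable point s2 (> 2a).
v = 0.8 * g_max
roots = s[np.nonzero(np.diff(np.sign(g0(s) - v)))[0]]
```

Starting lags below s2 converge to s1, as in Fig. 3(a); for v > g_max the two roots disappear and the lag diverges.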
Thus, gmax defines the maximum trackable speed of a moving stimulus. Notably, gmax increases with the strength of the external signal and with the range of the neuronal recurrent interactions. This is reasonable, since it is the neuronal interactions that induce the movement of the bump. gmax decreases with the time constant of the network, as this reflects the responsiveness of the network to external inputs.\n\nOn the other hand, for v < gmax, Eq. (12) has a stable and an unstable fixed point, denoted by s1 and s2 respectively. When the initial distance is less than s2, the separation converges to s1. Otherwise, the tracking of the stimulus is lost. Figs. 3(a) and (b) show that the analytical results of Eq. (12) agree well with the simulation results.\n\n3.2 Tracking an abrupt change of the stimulus\n\nSuppose the network has reached a steady state with an external stimulus stationary at t < 0, and the stimulus position jumps from 0 to z0 suddenly at t = 0. This is a typical scenario in experiments studying mental rotation behaviors. We first consider the case that the jump size z0 is small compared with the range a of the neuronal interactions. In the limit of a weak stimulus, the dynamics is described by Eq. (11) with R(t) = 1. We are interested in estimating the reaction time T, which is the time taken by the bump to move to within a small distance θ of the stimulus position. The reaction time increases logarithmically with the jump size, namely, T ≈ (τ/α) ln(|z0|/θ).\n\nFigure 4: (a) The dependence of the reaction time T on the new stimulus position z0. Symbols: simulation; lines: n = 1 to n = 5 perturbations. Parameters: as in Fig. 3. 
(b) Profiles of the bump between the old and new positions at z0 = π/2 in the simulation.\n\nWhen the strength α of the external stimulus is larger, an improved perturbation analysis up to n = 1 is required when the jump size z0 is large. This amounts to taking into account the change of the bump height during its movement from the old to the new position. The result is identical to Eq. (11), with R(t) replaced by\n\nR(t) = 1 + [α/(1 − λ0)] exp[−(1 − λ0)t/τ] + α ∫_0^t (dt′/τ) exp[−(1 − λ0)(t − t′)/τ − (z0 − z(t′))²/(8a²)].   (13)\n\nIndeed, R(t) represents the change in height during the movement of the bump. Contributions from the second and third terms show that the height is highest at the initial and final positions respectively, and lowest at some point in between, agreeing with the simulation results shown in Fig. 4(b). Fig. 4(a) shows that the n = 1 perturbation overcomes the insufficiency of the logarithmic estimate, and agrees excellently with the simulation results for z0 up to the order of 2a. We also compute the reaction time up to the n = 5 perturbation, and the agreement with simulations remains excellent even when z0 goes beyond 2a. This implies that beyond the range of the neuronal interactions, tracking is influenced by the distortions of the width and the skewed shape of the bump.\n\n4 The Two-Dimensional Case\n\nWe can straightforwardly extend the above analysis to two-dimensional (2D) CANNs. Consider a neural ensemble encoding a 2D continuous stimulus x = (x1, x2), where the network dynamics satisfies Eqs. (1)-(3) with the coordinates x and x′ replaced by the vectors x and x′, respectively. 
We can check that the network holds a continuous family of stationary states given by\n\nŨ(x|z) = U0 exp[−(x − z)²/(4a²)],   (14)\n\nwhere z is a free parameter indicating the position of the network state in the 2D manifold, and (x − z)² = (x1 − z1)² + (x2 − z2)² is the squared Euclidean distance between x and z.\n\nBy applying the stability analysis as in Sec. 2, we obtain the distortion modes of the bump dynamics, which are expressed as products of the motion modes in the 1D case, i.e.,\n\num,n(x|z) = um(x1|z1) un(x2|z2), for m, n = 0, 1, 2, . . .   (15)\n\nThe eigenvalues of these motion modes are calculated to be λ0,0 = λ0; λm,0 = λm for m ≠ 0; λ0,n = λn for n ≠ 0; and λm,n = λmλn for m ≠ 0 and n ≠ 0.\n\nThe mode u1,0(x|z) corresponds to the position shift of the bump in the direction x1, and u0,1(x|z) to the position shift in the direction x2. A linear combination of them, c1u1,0(x|z) + c2u0,1(x|z), corresponds to the position shift of the bump in the direction (c1, c2). We see that the eigenvalues of these motion modes are 1, implying that the network is neutrally stable in the 2D manifold. The eigenvalues of all other motion modes are less than 1. Figure 5 illustrates the tracking of a 2D stimulus, and the comparison of the simulation results on the reaction time with the perturbative approach. The n = 1 perturbation already gives an excellent agreement over a wide range of stimulus positions.\n\nFigure 5: (a) The tracking process of the network; (b) the reaction time vs. the jump size. The simulation result is compared with the theoretical prediction. 
Parameters: N = 40 × 40, k = 0.5, a = 0.5, τ = 1, J = √(2πa²), ρ = N/(2π)² and α = 0.05.\n\n5 Conclusions and Discussions\n\nTo conclude, we have systematically investigated how the neutral stability of a CANN facilitates the tracking performance of the network, a capability which is believed to have wide applications in brain functions. Two interesting behaviors are observed, namely, the maximum trackable speed for a moving stimulus, and the reaction time for catching up with an abrupt change of a stimulus, which is logarithmic for small changes and increases rapidly beyond the neuronal interaction range. These two properties are associated with the unique dynamics of a CANN. They are testable in practice and can serve as general clues for checking the existence of a CANN in neural systems. In order to solve the dynamics, which is otherwise extremely complicated for a large recurrent network, we have developed a perturbative analysis to simplify the dynamics of a CANN. Geometrically, it is equivalent to projecting the network state onto the dominant directions of its state space. This method works efficiently and may be widely used in the study of CANNs.\n\nThe special structure of a CANN may have other applications in brain functions; for instance, the highly structured state space of a CANN may provide a neural basis for encoding the topological relationships of objects in a feature space, as suggested by recent psychophysical experiments [15, 16]. It is likely that the distance between two memory states in a CANN defines the perceptual similarity between the two objects. It is interesting to note that the perceptual similarity measured by the psychometric functions of human subjects in a categorization task has a logarithmic nature similar to that of the reaction times in a CANN [17]. 
To study these issues theoretically and to justify the experimental findings, it is important to have analytic solutions of the state space and the dynamical behaviors of CANNs. We expect the analytical solution developed here to serve as a valuable mathematical tool.\n\nThe tracking dynamics of a CANN has also been studied by other authors. In particular, Zhang proposed a mechanism of using asymmetrical recurrent interactions to drive the bump, so that the shape distortion is minimized [4]. Xie et al. further proposed a double-ring network model to achieve these asymmetrical interactions in the head-direction system [8]. It is not clear how this mechanism can be generated in other neural systems. For instance, in the visual and hippocampal systems, it is often assumed that the bump movement is directly driven by external inputs (see, e.g., [5, 19, 20]), and the distortion of the bump is inevitable (indeed, the bump distortions in [19, 20] are associated with visual perception). The contribution of this study is that we quantify how the distortion of the bump shape affects the tracking performance of the network, and that we obtain a new result on the maximum trackable speed of the network.\n\nFinally, we would like to remark on the generality of the results in this work and their relationships to other studies in the literature. To pursue an analytical solution, we have used a divisive normalization to represent the inhibition effect. This is different from the Mexican-hat type of recurrent interactions used by many authors, for which it is often difficult to obtain a closed-form expression for the network stationary state. Amari used a Heaviside function to simplify the neural response, and obtained a box-shaped network stationary state [2]. However, since the Heaviside function is not differentiable, it is difficult to describe the tracking dynamics in the Amari model. 
Truncated sinusoidal interactions have also been used, but it is difficult to use them to describe general distortions of the bumps [3]. Here, by using divisive normalization and Gaussian-shaped recurrent interactions, we solve the network stationary states and the tracking dynamics analytically.\n\nOne may be concerned about the feasibility of the divisive normalization. First, we argue that neural systems have the resources to implement this mechanism [7, 18]. Consider, for instance, a neural network in which all excitatory neurons are connected to a pool of inhibitory neurons. Those inhibitory neurons have a time constant much shorter than that of the excitatory neurons, and they inhibit the activities of the excitatory neurons in a uniform shunting way, thus achieving the effect of divisive normalization. Second, and more importantly, the main conclusions of our work are qualitatively independent of the choice of the model. This is because our calculation is based on the fact that the dynamics of a CANN is dominated by the motion mode corresponding to the position shift of the network state, and this property is due to the translational invariance of the neuronal recurrent interactions, rather than to the inhibition mechanism. We have formally proved that for a CANN model, once the recurrent interactions are translationally invariant, the interaction kernel has a unit eigenvalue with respect to the position shift mode, irrespective of the inhibition mechanism (to be reported elsewhere).\n\nThis work is partially supported by the Research Grant Council of Hong Kong (Grant Nos. HKUST 603606 and HKUST 603607), BBSRC (BB/E017436/1) and the Royal Society.\n\nReferences\n\n[1] P. Dayan and L. Abbott, Theoretical Neuroscience: Computational and Mathematical Modelling of Neural Systems (MIT Press, Cambridge, MA, 2001).\n\n[2] S. Amari, Biological Cybernetics 27, 77 (1977).\n\n[3] R. Ben-Yishai, R. Lev Bar-Or and H. Sompolinsky, Proc. Natl. Acad. Sci. 
USA 92, 3844 (1995).\n\n[4] K.-C. Zhang, J. Neuroscience 16, 2112 (1996).\n\n[5] A. Samsonovich and B. L. McNaughton, J. Neuroscience 17, 5900 (1997).\n\n[6] B. Ermentrout, Reports on Progress in Physics 61, 353 (1998).\n\n[7] S. Deneve, P. Latham and A. Pouget, Nature Neuroscience 2, 740 (1999).\n\n[8] X. Xie, R. H. R. Hahnloser and S. Seung, Phys. Rev. E 66, 041902 (2002).\n\n[9] A. Renart, P. Song and X. Wang, Neuron 38, 473 (2003).\n\n[10] C. Brody, R. Romo and A. Kepecs, Current Opinion in Neurobiology 13, 204 (2003).\n\n[11] S. Wu and S. Amari, Neural Computation 17, 2215 (2005).\n\n[12] B. Blumenfeld, S. Preminger, D. Sagi and M. Tsodyks, Neuron 52, 383 (2006).\n\n[13] C. Chow and S. Coombes, SIAM J. Appl. Dyn. Sys. 5, 552 (2006).\n\n[14] J. Hopfield, Proc. Natl. Acad. Sci. USA 79, 2554 (1982).\n\n[15] J. Jastorff, Z. Kourtzi and M. Giese, J. Vision 6, 791 (2006).\n\n[16] A. B. A. Graf, F. A. Wichmann, H. H. Bülthoff and B. Schölkopf, Neural Computation 18, 143 (2006).\n\n[17] J. Zhang, J. Mathematical Psychology 48, 409 (2004).\n\n[18] D. Heeger, J. Neurophysiology 70, 1885 (1993).\n\n[19] M. Berry II, I. Brivanlou, T. Jordan and M. Meister, Nature 398, 334 (1999).\n\n[20] Y. Fu, Y. Shen and Y. Dan, J. Neuroscience 21, 1 (2001).\n", "award": [], "sourceid": 99, "authors": [{"given_name": "K.", "family_name": "Wong", "institution": null}, {"given_name": "Si", "family_name": "Wu", "institution": null}, {"given_name": "Chi", "family_name": "Fung", "institution": null}]}