{"title": "Dynamics of Analog Neural Networks with Time Delay", "book": "Advances in Neural Information Processing Systems", "page_first": 568, "page_last": 576, "abstract": null, "full_text": "DYNAMICS OF ANALOG NEURAL NETWORKS WITH TIME DELAY \n\nC.M. Marcus and R.M. Westervelt \n\nDivision of Applied Sciences and Department of Physics \nHarvard University, Cambridge, Massachusetts 02138 \n\nABSTRACT \n\nA time delay in the response of the neurons in a network can induce sustained oscillation and chaos. We present a stability criterion based on local stability analysis to prevent sustained oscillation in symmetric delay networks, and show an example of chaotic dynamics in a non-symmetric delay network. \n\nI. INTRODUCTION \n\nUnderstanding how time delay affects the dynamics of neural networks is important for two reasons. First, some degree of time delay is intrinsic to any physically realized network, both in biological neural systems and in electronic artificial neural networks. As we will show, it is not obvious what constitutes a \"small\" (i.e. ignorable) delay which will not qualitatively change the network dynamics. For some network configurations, delay much smaller than the intrinsic relaxation time of the network can induce collective oscillatory behavior not predicted by mathematical models which ignore delay. These oscillations may or may not be desirable; in either case, one should understand when and how new dynamics can appear. The second reason to study time delay is for its intentional use in parallel computation. The dynamics of neural networks which always converge to fixed points are now fairly well understood. Several neural network models have appeared recently which use time delay to produce dynamic computation such as associative recall of sequences [Kleinfeld, 1986; Sompolinsky and Kanter, 1986]. 
It has also been suggested that time delay produces an effective noise in the network dynamics which can yield improved recall of memories [Conwell, 1987]. Finally, to the extent that neural networks research is inspired by biological systems, the known presence of time delays in many real neural systems suggests their usefulness in parallel computation. \n\nIn this paper we will show how time delay in an analog neural network can produce sustained oscillation and chaos. In section 2 we consider the case of a symmetrically connected network. It is known [Cohen and Grossberg, 1983; Hopfield, 1984] that in the absence of time delay a symmetric network will always converge to a fixed point attractor. We show that adding a fixed delay to the response of each neuron will produce sustained oscillation when the magnitude of the delay exceeds a critical value, which depends on the neuron gain and the network connection topology. We then analyze the all-inhibitory and symmetric ring topologies as examples. In section 3, we discuss chaotic dynamics in asymmetric neural networks, and give an example of a small (N = 3) network which shows delay-induced chaos. The analytical results presented here are supported by numerical simulations and experiments performed on a small electronic neural network with controllable time delay. A detailed derivation of the stability results for the symmetric network is given in [Marcus and Westervelt, 1989], and the electronic circuit used is described in [Marcus and Westervelt, 1988]. \n\nII. STABILITY OF SYMMETRIC NETWORKS WITH DELAY \n\nThe dynamical system we consider describes an electronic circuit of N saturable amplifiers (\"neurons\") coupled by a resistive interconnection matrix. The neurons do not respond to an input voltage u_i instantaneously, but produce an output after a delay, which we take to be the same for all neurons. 
The neuron input voltages evolve according to the following equations: \n\ndu_i/dt = -u_i(t) + Σ_{j=1}^{N} J_ij f(u_j(t - τ))     (1) \n\nThe transfer function for each neuron is taken to be an identical sigmoidal function f(u) with a maximum slope df/du = β at u = 0. The unit of time in these equations has been scaled to the characteristic network relaxation time, thus τ can be thought of as the ratio of delay time to relaxation time. The symmetric interconnection matrix J_ij describes the conductance between neurons i and j and is normalized to satisfy Σ_j |J_ij| = 1 for all i. This normalization assumes that each neuron sees the same conductance at its input [Marcus and Westervelt, 1989]. The initial conditions for this system are a set of N continuous functions defined on the interval -τ ≤ t ≤ 0. We take each initial function to be constant over that interval, though possibly different for different i. We find numerically that the results do not depend on the form of the initial functions. \n\nLinear Stability Analysis at Low Gain \n\nStudying the stability of the fixed point at the origin (u_i = 0 for all i) is useful for understanding the source of delay-induced sustained oscillation and will lead to a low-gain stability criterion for symmetric networks. It is important to realize, however, that for the system (1) with a sigmoidal nonlinearity, if the origin is stable then it is the unique attractor, which makes for rather uninteresting dynamics. Thus the origin will almost certainly be unstable in any useful configuration. Linear stability analysis about the origin will show that at τ = 0, as the gain β is increased, the origin always loses stability by a type of bifurcation which only produces other fixed points, but for τ > 0 an alternative type of bifurcation of the origin can occur which produces the sustained oscillatory modes. 
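The delay system (1) is straightforward to explore numerically with a fixed-step Euler scheme that keeps a history buffer for the delayed term. The sketch below is ours, not the authors' code: it assumes f(u) = tanh(βu) (a sigmoid with maximum slope β at u = 0, as required above), constant initial functions, and the N = 3 all-inhibitory matrix analyzed later in the paper; the function name and parameter values are illustrative choices.

```python
import numpy as np

def simulate_delay_network(J, beta, tau, u0, T=100.0, dt=0.01):
    """Euler integration of du_i/dt = -u_i(t) + sum_j J_ij f(u_j(t - tau)),
    with f(u) = tanh(beta*u) and constant initial functions on [-tau, 0]."""
    n_delay = max(1, int(round(tau / dt)))   # delay measured in time steps
    n_steps = int(round(T / dt))
    u = np.zeros((n_steps + 1, len(u0)))
    u[0] = np.asarray(u0, dtype=float)
    for t in range(n_steps):
        # before t = n_delay, the delayed value is the constant initial function
        u_delayed = u[max(t - n_delay, 0)]
        u[t + 1] = u[t] + dt * (-u[t] + J @ np.tanh(beta * u_delayed))
    return u

# All-inhibitory network, J_ij = (delta_ij - 1)/(N - 1), N = 3
N = 3
J = (np.eye(N) - 1.0) / (N - 1)
u0 = [0.1, -0.2, 0.15]

u_short = simulate_delay_network(J, beta=1.5, tau=0.1, u0=u0)  # short delay
u_long = simulate_delay_network(J, beta=1.5, tau=5.0, u0=u0)   # long delay

# peak-to-peak swing over the last fifth of the run distinguishes
# convergence to the origin from sustained oscillation
swing = lambda u: u[-2000:].max() - u[-2000:].min()
print(swing(u_short), swing(u_long))
```

At this gain (β = 1.5, λ_min = -1) the short run converges to the origin, while τ = 5 produces sustained coherent oscillation, consistent with the low-gain analysis of this section.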
The stability criterion derived ensures that this alternate bifurcation - a Hopf bifurcation - does not occur. \n\nThe natural coordinate system for the linearized version of (1) is the set of N eigenvectors of the connection matrix J_ij, defined as x_i(t), i = 1,...,N. In terms of the x_i(t), the linearized system can be written \n\ndx_i/dt = -x_i(t) + βλ_i x_i(t - τ)     (2) \n\nwhere β is the neuron gain and λ_i (i = 1,...,N) are the eigenvalues of J_ij. In general, these eigenvalues have both real and imaginary parts; for J_ij = J_ji the λ_i are purely real. Assuming exponential time evolution of the form x_i(t) = x_i(0)e^{s_i t}, where s_i is a complex characteristic exponent, yields a set of N transcendental characteristic equations: (s_i + 1)e^{s_i τ} = βλ_i. The condition for stability of the origin, Re(s_i) < 0 for all i, and the characteristic equations can be used to specify a stability region in the complex plane of eigenvalues, as illustrated in Fig. 1(a). When all eigenvalues of J_ij are within the stability region, the origin is stable. For τ = 0, the stability region is defined by Re(λ) < 1/β, giving a half-plane stability condition familiar from ordinary differential equations. For τ > 0, we define the border of the stability region Λ(θ) at an angle θ from the Re(λ) axis as the radial distance from the point λ = 0 to the first point (i.e. smallest value of Λ(θ)) which satisfies the characteristic equation for purely imaginary characteristic exponent s_i = iω_i. The delay-dependent value of Λ(θ) is given by \n\nΛ(θ) = (1/β)(ω² + 1)^{1/2} ;  ω = -tan(ωτ - θ)     (3) \n\nwhere ω is in the range (θ - π/2) ≤ ωτ ≤ θ, modulo 2π. \n\nFigure 1. (a) Regions of Stability in the Complex Plane of Eigenvalues λ of the Connection Matrix J_ij, for τ = 0, 1, ∞. 
(b) Where the Stability Region Crosses the Real-λ Axis in the Negative Half-Plane (βΛ plotted against τ). \n\nNotice that for nonzero delay the stability region closes on the Re(λ) axis in the negative half-plane. It is therefore possible for negative real eigenvalues to induce an instability of the origin. Specifically, if the minimum eigenvalue of the symmetric matrix J_ij is more negative than -Λ(θ = π) then the origin is unstable. We define this \"back door\" to the stability region along the real axis as Λ > 0, dropping the argument θ = π. Λ is inversely proportional to the gain β and depends on delay as shown in Fig. 1(b). For large and small delay, Λ can be approximated as an explicit function of delay and gain: \n\nΛ ≈ π/(2βτ)  (τ << 1)     (4a) \nΛ ≈ 1/β  (τ >> 1)     (4b) \n\nIn the infinite-delay limit, the delay-differential system (1) is equivalent to an iterated map or parallel-update network of the form u_i(t+1) = Σ_j J_ij f(u_j(t)), where t is a discrete iteration index. In this limit, the stability region is circular, corresponding to the fixed point stability condition for the iterated map system. \n\nConsider the stability of the origin in a symmetrically connected delay system (1) as the neuron gain β is increased from zero to a large value. A bifurcation of the origin will occur when the maximum eigenvalue λ_max > 0 of J_ij becomes larger than 1/β, or when the minimum eigenvalue λ_min < 0 becomes more negative than -Λ = -β^{-1}(ω² + 1)^{1/2}, where ω = -tan(ωτ), [π/2 < ωτ < π]. Which bifurcation occurs first depends on the delay and the eigenvalues of J_ij. The bifurcation at λ_max = β^{-1} is a pitchfork (as it is for τ = 0), corresponding to a characteristic exponent s_i crossing into the positive real half-plane along the real axis. This bifurcation creates a pair of fixed points along the eigenvector x_i associated with that eigenvalue. 
These fixed points constitute a single memory state of the network. The bifurcation at λ_min = -Λ corresponds to a Hopf bifurcation [Marsden and McCracken, 1976], where a pair of characteristic exponents pass into the positive real half-plane with imaginary components ±ω, where ω = -tan(ωτ), [π/2 < ωτ < π]. This bifurcation, not present at τ = 0, creates an oscillatory attractor along the eigenvector associated with λ_min. \n\nA simple stability criterion can be constructed by requiring that the most negative eigenvalue of the (symmetric) connection matrix not be more negative than -Λ. Because Λ is always larger than its small-delay limit π/(2τβ), the criterion can be stated as a limit on the size of the delay (in units of the network relaxation time): \n\nτ < π/(2β|λ_min|)  =>  no sustained oscillation.     (5) \n\nLinear stability analysis does not prove global stability, but the criterion (5) is supported by considerable numerical and experimental evidence [Marcus and Westervelt, 1989]. For long delays, where Λ ≈ 1/β, linear stability analysis suggests that sustained oscillation will not exist as long as -β^{-1} < λ_min. In the infinite-delay limit, it can be shown that this condition ensures global stability in the discrete-time parallel-update network [Marcus and Westervelt, to appear]. \n\nAt large gain, Eq. (5) does not provide a useful stability criterion because the delay required for stability tends to zero as β → ∞. The nonlinearity of the transfer function becomes important at large gain, and stable, fixed-point-only dynamics are found at large gain and nonzero delay, indicating that Eq. (5) is overly conservative at large gain. To understand this, we must include the nonlinearity and consider the stability of the oscillatory modes themselves. This is described in the next section. 
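The transcendental border (3) and the limits (4a)-(4b) are easy to evaluate numerically. The sketch below (the function name and bisection method are our choices, not from the paper) finds the back door Λ = Λ(θ = π) by solving ω = -tan(ωτ) on π/2 < ωτ < π and compares it with the small- and large-delay approximations:

```python
import math

def back_door(beta, tau):
    """Lambda(theta = pi): radial border of the stability region on the
    negative real axis, from Eq. (3) with omega = -tan(omega*tau)
    restricted to pi/2 < omega*tau < pi."""
    lo = (math.pi / 2) / tau * (1 + 1e-12)   # just past omega*tau = pi/2
    hi = math.pi / tau * (1 - 1e-12)         # just short of omega*tau = pi
    g = lambda w: w + math.tan(w * tau)      # g(lo) -> -inf, g(hi) > 0
    for _ in range(200):                     # bisection on the sign change
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    omega = 0.5 * (lo + hi)
    return math.sqrt(omega * omega + 1.0) / beta

beta = 4.0
for tau in (0.01, 1.0, 100.0):
    # columns: tau, Lambda, small-delay limit pi/(2*beta*tau), large-delay limit 1/beta
    print(tau, back_door(beta, tau), math.pi / (2 * beta * tau), 1 / beta)
```

The printed values confirm that Λ stays above its small-delay limit π/(2βτ), as used in deriving criterion (5), and approaches 1/β for long delays.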
Stability in the Large-Gain Limit \n\nWe now analyze the oscillatory mode at large gain for the particular case of coherent oscillation. We find a second stability criterion which predicts a gain-independent critical delay below which all initial conditions lead to fixed points. This result complements the low-gain result of the previous section for this class of network; experimentally and numerically we find excellent agreement in both regimes, with a cross-over at the value of gain where fixed points appear away from the origin, β = 1/λ_max. \n\nIn considering only coherent oscillation, we not only assume that J_ij is symmetric but that its maximum and minimum eigenvalues satisfy 0 < λ_max < -λ_min and that the eigenvector associated with λ_min points in a coherent direction, defined to be along any of the 2^N vectors of the form (±1, ±1, ±1, ...) in the u_i basis. For this case, we find that in the limit of infinite gain, where the nonlinearity is of the form f(u) = sgn(u), multiple fixed point attractors coexist with the oscillatory attractor and that the size of the basin of attraction for the oscillatory mode varies with the delay [Marcus and Westervelt, 1988]. At a critical value of delay τ_crit the basin of attraction for oscillation vanishes and the oscillatory mode loses stability. In [Marcus and Westervelt, 1989] we show: \n\nτ_crit = -ln(1 + λ_max/λ_min)     (6) \n\nFor delays less than this critical value, all initial states lead to stable fixed points. \n\nNotice that the critical delay for coherent oscillation diverges as |λ_max/λ_min| → 1⁻. Experimentally and numerically we find that this prediction has more general applicability: none of the symmetric networks investigated which satisfied |λ_max/λ_min| ≥ 1 (and λ_max > 0) showed sustained oscillation for τ < ~10. 
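Equation (6) can be checked directly. For the all-inhibitory matrix J_ij = (N-1)^{-1}(δ_ij - 1) analyzed below, λ_max = 1/(N-1) and λ_min = -1, so (6) reduces to τ_crit = ln[(N-1)/(N-2)]. A minimal sketch (the helper name and the numpy-based eigenvalue call are our choices, not the authors'):

```python
import math
import numpy as np

def tau_crit(J):
    """Critical delay of Eq. (6), tau_crit = -ln(1 + lmax/lmin), for a
    symmetric J with 0 < lmax < -lmin (the coherent-oscillation regime)."""
    eig = np.linalg.eigvalsh(J)          # real eigenvalues, ascending
    lmin, lmax = eig[0], eig[-1]
    assert 0 < lmax < -lmin, "outside the regime assumed for Eq. (6)"
    return -math.log(1.0 + lmax / lmin)

# all-inhibitory example: tau_crit should equal ln((N-1)/(N-2))
for N in (3, 4, 10):
    J = (np.eye(N) - 1.0) / (N - 1)
    print(N, tau_crit(J), math.log((N - 1) / (N - 2)))
```

As |λ_max/λ_min| → 1 the argument of the logarithm tends to zero and τ_crit diverges, matching the divergence noted above.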
The observation that symmetric networks with |λ_max/λ_min| ≥ 1 do not show sustained oscillation is a useful criterion for electronic circuit design, where single-device delays are generally shorter than the circuit relaxation time (τ < 1), but only the case of coherent oscillation is supported by analysis. \n\nExamples \n\nAs a first example, we consider the fully-connected all-inhibitory network, Eq. (1) with J_ij = (N-1)^{-1}(δ_ij - 1). This matrix has N-1 degenerate eigenvalues at +1/(N-1) and a single eigenvalue at -1. A similar network configuration (with delays) has been studied as a model of lateral inhibition in the eye of the horseshoe crab, Limulus [Coleman and Renninger, 1975, 1976; Hadeler and Tomiuk, 1977; an der Heiden, 1980]. Previous analysis of sustained oscillation in this system has assumed a coherent form for the oscillatory solution, which reduces the problem to a single scalar delay-differential equation. However, by constraining the solution to lie along the coherent direction, the instability of the oscillatory mode discussed above is not seen. Because of this assumption, fixed-point-only dynamics in the large-gain limit with finite delay are not predicted by previous treatments, to our knowledge. \n\nThe behavior of the network at various values of gain and delay is illustrated in Fig. 2 for the particular case of N = 3. The four regions labeled A, B, C and D characterize the behavior for all N. At low gain (β < N-1) the origin is the unique attractor for small delay (region A) and undergoes a Hopf bifurcation to sustained coherent oscillation at τ ≈ π(β² - 1)^{-1/2} for large delay (region B). At β = N-1, fixed points away from the origin appear. In addition to these fixed points, an oscillatory attractor exists at large gain for τ > ln[(N-1)/(N-2)] (≈ 1/N for large N) (region C). Sustained oscillation does not exist below this critical delay (region D). \n\nFigure 2. 
Stability Diagram for the All-Inhibitory Delay Network for the Case N = 3. See Text for a Description of Regions A, B, C and D. \n\nAs a second example, we consider a ring of delayed neurons. We allow the symmetric connections to be of either sign - that is, connections between neighboring pairs can be mutually excitatory or inhibitory - but all of the same strength. The eigenvalues for the symmetric ring of size N are λ_k = cos(2π(k+
0.97 ms both periodic and chaotic attractors are found. \n\nFigure 4. Period Doubling to Chaos as the Delay in Neuron 1 is Increased (panels plot V2 against V1). \n\nChaos in the network of Fig. 4 is closely related to a well-known chaotic delay-differential equation with a noninvertible feedback term [Mackey and Glass, 1977]. The noninvertible or \"mixed\" feedback necessary to produce chaos in the Mackey-Glass equation is achieved in the neural network - which has only monotone transfer functions - by using asymmetric connections. \n\nThis association between asymmetry and noninvertible feedback suggests that asymmetric connections may be necessary to produce chaotic dynamics in neural networks, even when time delay is present. This conjecture is further supported by considering the two limiting cases of zero delay and infinite delay, neither of which shows chaotic dynamics for symmetric connections. \n\nIV. CONCLUSION AND OPEN PROBLEMS \n\nWe have considered the effects of delayed response in a continuous-time neural network. We find that when the delay of each neuron exceeds a critical value, sustained oscillatory modes appear in a symmetric network. Stability analysis yields a design criterion for building stable electronic neural networks, but these results can also be used to create desired oscillatory modes in delay networks. For example, a variation of the Hebb rule [Hebb, 1949], created by simply taking the negative of a Hebb matrix, will give negative real eigenvalues corresponding to programmed oscillatory patterns. Analyzing the storage capacities and other properties of neural networks with dynamic attractors remains a challenging problem [see, e.g. Gutfreund and Mezard, 1988]. \n\nIn analyzing the stability of delay systems, we have assumed that the delays and gains of all neurons are identical. 
This is quite restrictive and is certainly not justified from a biological viewpoint. It would be interesting to study the effects of a wide range of delays in both symmetric and non-symmetric neural networks. It is possible, for example, that the coherent oscillation described above will not persist when the delays are widely distributed. \n\nAcknowledgements \n\nOne of us (CMM) acknowledges support as an AT&T Bell Laboratories Scholar. Research supported in part by JSEP contract N00014-84-K-0465. \n\nReferences \n\nS. Amari, 1971, Proc. IEEE 59, 35. \nS. Amari, 1972, IEEE Trans. SMC-2, 643. \nU. an der Heiden, 1980, Analysis of Neural Networks, Vol. 35 of Lecture Notes in Biomathematics (Springer, New York). \nK.L. Babcock and R.M. Westervelt, 1987, Physica 28D, 305. \nM.A. Cohen and S. Grossberg, 1983, IEEE Trans. SMC-13, 815. \nA.H. Cohen, S. Rossignol and S. Grillner, 1988, Neural Control of Rhythmic Motion (Wiley, New York). \nB.D. Coleman and G.H. Renninger, 1975, J. Theor. Biol. 51, 243. \nB.D. Coleman and G.H. Renninger, 1976, SIAM J. Appl. Math. 31, 111. \nP.R. Conwell, 1987, in Proc. of IEEE First Int. Conf. on Neural Networks, III-95. \nH. Gutfreund, J.D. Reger and A.P. Young, 1988, J. Phys. A 21, 2775. \nH. Gutfreund and M. Mezard, 1988, Phys. Rev. Lett. 61, 235. \nK.P. Hadeler and J. Tomiuk, 1977, Arch. Rat. Mech. Anal. 65, 87. \nD.O. Hebb, 1949, The Organization of Behavior (Wiley, New York). \nJ.J. Hopfield, 1984, Proc. Nat. Acad. Sci. USA 81, 3008. \nD. Kleinfeld, 1986, Proc. Nat. Acad. Sci. USA 83, 9469. \nK.E. Kürten and J.W. Clark, 1986, Phys. Lett. 114A, 413. \nM.C. Mackey and L. Glass, 1977, Science 197, 287. \nC.M. Marcus and R.M. Westervelt, 1988, in Proc. IEEE Conf. on Neural Info. Proc. Syst., Denver, CO, 1987 (American Institute of Physics, New York). \nC.M. Marcus and R.M. Westervelt, 1989, Phys. Rev. A 39, 347. \nJ.E. Marsden and M. 
McCracken, 1976, The Hopf Bifurcation and its Applications (Springer-Verlag, New York). \nU. Riedel, R. Kühn, and J.L. van Hemmen, 1988, Phys. Rev. A 38, 1105. \nS. Shinomoto, 1986, Prog. Theor. Phys. 75, 1313. \nH. Sompolinsky and I. Kanter, 1986, Phys. Rev. Lett. 57, 259. \nH. Sompolinsky, A. Crisanti and H.J. Sommers, 1988, Phys. Rev. Lett. 61, 259. \nG. Toulouse, 1977, Commun. Phys. 2, 115. \n", "award": [], "sourceid": 111, "authors": [{"given_name": "Charles", "family_name": "Marcus", "institution": null}, {"given_name": "R.", "family_name": "Westervelt", "institution": null}]}