{"title": "Quadratic-Type Lyapunov Functions for Competitive Neural Networks with Different Time-Scales", "book": "Advances in Neural Information Processing Systems", "page_first": 337, "page_last": 343, "abstract": null, "full_text": "Quadratic-Type Lyapunov Functions for \n\nCompetitive Neural Networks with \n\nDifferent Time-Scales \n\nAnke Meyer-Base \n\nInstitute of Technical Informatics \nTechnical University of Darmstadt \n\nDarmstadt, Germany 64283 \n\nAbstract \n\nThe dynamics of complex neural networks modelling the self(cid:173)\norganization process in cortical maps must include the aspects of \nlong and short-term memory. The behaviour of the network is such \ncharacterized by an equation of neural activity as a fast phenom(cid:173)\nenon and an equation of synaptic modification as a slow part of the \nneural system. We present a quadratic-type Lyapunov function for \nthe flow of a competitive neural system with fast and slow dynamic \nvariables. We also show the consequences of the stability analysis \non the neural net parameters. \n\n1 \n\nINTRODUCTION \n\nThis paper investigates a special class of laterally inhibited neural networks. In \nparticular, we have examined the dynamics of a restricted class of laterally inhibited \nneural networks from a rigorous analytic standpoint. \nThe network models for retinotopic and somatotopic cortical maps are usually com(cid:173)\nposed of several layers of neurons from sensory receptors to cortical units, with \nfeedforward excitations between the layers and lateral (or recurrent) connection \nwithin the layer. Standard techniques include (1) Hebbian rule and its variations \nfor modifying synaptic efficacies, (2) lateral inhibition for establishing topographical \norganization of the cortex, and (3) adiabatic approximation in decoupling the dy(cid:173)\nnamics of relaxation (which is on the fast time scale) and the dynamics of learning \n(which is on the slow time scale) of the network . 
However, in most cases only computer simulation results were obtained, which provide limited mathematical understanding of the self-organizing neural response fields.

The networks under study model the dynamics of both the neural activity levels, the short-term memory (STM), and the dynamics of synaptic modifications, the long-term memory (LTM). The actual network models under consideration may be considered extensions of Grossberg's shunting network [Gro76] or Amari's model for primitive neuronal competition [Ama82]. These earlier networks are pools of mutually inhibitory neurons with fixed synaptic connections. Our results extend these earlier studies to systems where the synapses can be modified by external stimuli. The dynamics of competitive systems may be extremely complex, exhibiting convergence to point attractors and periodic attractors. For networks which model only the dynamics of the neural activity levels, Cohen and Grossberg [CG83] found a Lyapunov function as a necessary condition for the convergence behavior to point attractors.

In this paper we apply results from the theory of Lyapunov functions for singularly perturbed systems to large-scale neural networks, which have two types of state variables (LTM and STM) describing the slow and the fast dynamics of the system. We can thus find a Lyapunov function for the neural system with different time-scales and give a design concept for storing desired patterns as stable equilibrium points.

2 THE CLASS OF NEURAL NETWORKS WITH DIFFERENT TIME-SCALES

This section defines the system of differential equations characterizing laterally inhibited neural networks. We consider a laterally inhibited network with a deterministic signal Hebbian learning law [Heb49], similar to the spatiotemporal system of Amari [Ama83].
The general neural network equations describe the temporal evolution of the STM states (activity modification) and the LTM states (synaptic modification). For the jth neuron of an N-neuron network these equations are:

    \dot{x}_j = -a_j x_j + \sum_{i=1}^{N} D_{ij} f(x_i) + B_j S_j        (1)

    \dot{S}_j = -S_j + |y|^2 f(x_j)        (2)

where x_j is the current activity level, a_j is the time constant of the neuron, B_j is the contribution of the external stimulus term, f(x_i) is the neuron's output, D_{ij} is the lateral inhibition term and y_i is the external stimulus. The dynamic variable S_j represents the synaptic modification state, and |y|^2 is defined as |y|^2 = y^T y. We will assume that the input stimuli are normalized vectors of unit magnitude, |y|^2 = 1. These systems will be the subject of our analysis of the stability of their equilibrium points.

3 ASYMPTOTIC STABILITY OF NEURAL NETWORKS WITH DIFFERENT TIME-SCALES

We show in this section that it is possible to determine the asymptotic stability of this class of neural networks by interpreting them as nonlinear singularly perturbed systems. While singular perturbation theory, a traditional tool of fluid dynamics and nonlinear mechanics, embraces a wide variety of dynamic phenomena possessing slow and fast modes, we show that singular perturbations are also present in many neurodynamical problems. In this sense we apply the results of this valuable analysis tool to the dynamics of laterally inhibited networks.

In [SK84] it is shown that a quadratic-type Lyapunov function for a singularly perturbed system is obtained as a weighted sum of quadratic-type Lyapunov functions of two lower-order systems: the so-called reduced and boundary-layer systems.
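Before turning to the formal construction, the coupled dynamics (1)-(2) can be integrated directly to observe the two time scales. The sketch below is a minimal illustration, not an experiment from the paper: forward-Euler integration with assumed values (f = tanh, self-excitation 0.5 and lateral inhibition -0.2 in D, and a negative stimulus gain B = -1, chosen so that the origin remains stable):

```python
import numpy as np

def simulate_stm_ltm(N=3, steps=20000, dt=1e-3, a=1.0, B=-1.0, seed=0):
    """Forward-Euler integration of the STM/LTM system (1)-(2).

    Illustrative (assumed) choices: f = tanh, D with self-excitation 0.5
    and lateral inhibition -0.2, stimulus gain B < 0, and a random
    stimulus y normalized so that |y|^2 = 1.
    """
    rng = np.random.default_rng(seed)
    D = 0.7 * np.eye(N) - 0.2 * np.ones((N, N))  # D_ii = 0.5, D_ij = -0.2
    y = rng.standard_normal(N)
    y /= np.linalg.norm(y)                       # |y|^2 = y^T y = 1
    x = 0.1 * rng.standard_normal(N)             # STM states x_j
    S = np.zeros(N)                              # LTM states S_j
    for _ in range(steps):
        f = np.tanh(x)
        x = x + dt * (-a * x + D.T @ f + B * S)  # eq. (1)
        S = S + dt * (-S + (y @ y) * f)          # eq. (2) with |y|^2 = 1
    return x, S

x, S = simulate_stm_ltm()
```

With these values both state vectors settle at the origin; for these parameters a positive B instead turns the origin into a saddle, anticipating the sign condition on the external stimulus term derived in Section 4.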
Assuming that each of the two systems is asymptotically stable and has a Lyapunov function, conditions are derived to guarantee that, for a sufficiently small perturbation parameter, asymptotic stability of the singularly perturbed system can be established by means of a Lyapunov function which is composed as a weighted sum of the Lyapunov functions of the reduced and boundary-layer systems.

Adopting the notation from [SK84], we consider the singularly perturbed system²

    \dot{x} = f(x, y),    x \in B_x \subset R^n        (3)

    \epsilon \dot{y} = g(x, y, \epsilon),    y \in B_y \subset R^m        (4)

We assume that, in B_x and B_y, the origin (x = y = 0) is the unique equilibrium point and that (3) and (4) have a unique solution. A reduced system is defined by setting \epsilon = 0 in (3) and (4) to obtain

    \dot{x} = f(x, y)        (5)

    0 = g(x, y, 0)        (6)

Assuming that in B_x and B_y (6) has a unique root y = h(x), the reduced system is rewritten as

    \dot{x} = f(x, h(x)) = f_r(x)        (7)

A boundary-layer system is defined as

    \frac{\partial y}{\partial \tau} = g(x, y(\tau), 0)        (8)

where \tau = t/\epsilon is a stretched time scale. In (8) the vector x \in R^n is treated as a fixed unknown parameter that takes values in B_x. The aim is to establish the stability properties of the singularly perturbed system (3) and (4), for small \epsilon, from those of the reduced system (7) and the boundary-layer system (8). The Lyapunov functions for systems (7) and (8) are of quadratic type. In [SK84] it is shown that under mild assumptions, for sufficiently small \epsilon, any weighted sum of the Lyapunov functions of the reduced and boundary-layer systems is a quadratic-type Lyapunov function for the singularly perturbed system (3) and (4).

The necessary assumptions are now stated [SK84]:

1. The reduced system (7) has a Lyapunov function V : R^n \to R_+ such that for all x \in B_x

    (\nabla V(x))^T f(x, h(x)) \le -\alpha_1 \psi^2(x),    \alpha_1 > 0        (9)

where \psi(x) is a scalar-valued function of x that vanishes at x = 0 and is different from zero for all other x \in B_x.
This condition guarantees that x = 0 is an asymptotically stable equilibrium point of the reduced system (7).

²The symbol B_x indicates a closed sphere centered at x = 0; B_y is defined in the same way.

2. The boundary-layer system (8) has a Lyapunov function W(x, y) : R^n \times R^m \to R_+ such that for all x \in B_x and y \in B_y

    (\nabla_y W(x, y))^T g(x, y, 0) \le -\alpha_2 \phi^2(y - h(x)),    \alpha_2 > 0        (10)

where \phi(y - h(x)) is a scalar-valued function of (y - h(x)) \in R^m that vanishes at y = h(x) and is different from zero for all other x \in B_x and y \in B_y. This condition guarantees that y = h(x) is an asymptotically stable equilibrium point of the boundary-layer system (8).

3. The following three inequalities hold \forall x \in B_x and \forall y \in B_y:

a.)    (\nabla_x W(x, y))^T f(x, y) \le c_1 \phi^2(y - h(x)) + c_2 \psi(x) \phi(y - h(x))        (11)

b.)    (\nabla_x V(x))^T [f(x, y) - f(x, h(x))] \le \beta_1 \psi(x) \phi(y - h(x))        (12)

c.)    (\nabla_y W(x, y))^T [g(x, y, \epsilon) - g(x, y, 0)] \le \epsilon K_1 \phi^2(y - h(x)) + \epsilon K_2 \psi(x) \phi(y - h(x))        (13)

The constants c_1, c_2, \beta_1, K_1 and K_2 are nonnegative. The inequalities above determine the permissible interaction between the slow and fast variables; they are basically smoothness requirements on f and g.

After these introductory remarks the stability criterion is now stated:

Theorem: Suppose that conditions 1-3 hold; let d be a positive number such that 0 < d < 1, and let \epsilon^*(d) be the positive number given by

    \epsilon^*(d) = \frac{\alpha_1 \alpha_2}{\alpha_1 \gamma + \frac{1}{4d(1-d)}\left[(1-d)\beta_1 + d\beta_2\right]^2}        (14)

where \beta_2 = K_2 + c_2 and \gamma = K_1 + c_1. Then for all \epsilon < \epsilon^*(d) the origin (x = y = 0) is an asymptotically stable equilibrium point of (3) and (4), and

    v(x, y) = (1 - d)V(x) + dW(x, y)        (15)

is a Lyapunov function of (3) and (4).

If we introduce \epsilon as a global neural time constant in equation (1), then we have to determine two Lyapunov functions: one for the boundary-layer system and the other for the reduced-order system.
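The bound (14), \epsilon^*(d) = \alpha_1 \alpha_2 / (\alpha_1 \gamma + [(1-d)\beta_1 + d\beta_2]^2 / (4d(1-d))), is straightforward to evaluate numerically. The sketch below scans the weight d; the constants are those that arise in the two-neuron example of Section 4 (\alpha_1 = 11, \alpha_2 = 0.25, \gamma = 5, \beta_1 = \beta_2 = 1), taken here simply as given inputs:

```python
import numpy as np

def eps_star(alpha1, alpha2, beta1, beta2, gamma, d):
    """Upper bound eps*(d) of [SK84] on the perturbation parameter."""
    return alpha1 * alpha2 / (
        alpha1 * gamma + ((1 - d) * beta1 + d * beta2) ** 2 / (4 * d * (1 - d))
    )

# Constants arising in the two-neuron example (A = 1, alpha = 0.5, B = -5):
# alpha1 = 1 - B/(A - alpha) = 11, alpha2 = (A - alpha)^2 = 0.25,
# gamma = -B = 5, beta1 = beta2 = 1.
d = np.linspace(0.01, 0.99, 981)
eps = eps_star(11.0, 0.25, 1.0, 1.0, 5.0, d)
d_best = float(d[np.argmax(eps)])  # weight d maximizing the bound
```

With \beta_1 = \beta_2 the bound is symmetric in d, and the scan confirms the maximum at d = 0.5, where \epsilon^*(0.5) = 2.75/56 \approx 0.049.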
In [CG83] a global Lyapunov function is given for a competitive neural network with only an activation dynamics, \dot{x}_j = a_j(x_j)[b_j(x_j) - \sum_k m_{jk} f_k(x_k)]:

    V(x) = -\sum_{j=1}^{N} \int_0^{x_j} b_j(\zeta_j) f_j'(\zeta_j)\, d\zeta_j + \frac{1}{2} \sum_{j,k=1}^{N} m_{jk} f_j(x_j) f_k(x_k)        (16)

under the constraints m_{ij} = m_{ji}, a_i(x_i) \ge 0, f_j'(x_j) \ge 0.

This Lyapunov function can be taken as one for the boundary-layer system (the STM equation) if the LTM contribution S_j is considered a fixed unknown parameter:

    W(x, S) = \sum_{j=1}^{N} \int_0^{x_j} a_j \zeta_j f_j'(\zeta_j)\, d\zeta_j - \sum_{j=1}^{N} B_j S_j \int_0^{x_j} f_j'(\zeta_j)\, d\zeta_j - \frac{1}{2} \sum_{j,k=1}^{N} D_{jk} f_j(x_j) f_k(x_k)        (17)

For the reduced-order system (the LTM equation) we can take as a Lyapunov function:

    V(S) = \frac{1}{2} S^T S = \frac{1}{2} \sum_{i=1}^{N} S_i^2        (18)

The Lyapunov function for the coupled STM and LTM dynamics is the weighted sum of the two Lyapunov functions:

    v(x, S) = (1 - d)V(S) + dW(x, S)        (19)

4 DESIGN OF STABLE COMPETITIVE NEURAL NETWORKS

Competitive neural networks with learning rules have moving equilibria during the learning process. The concept of asymptotic stability derived from matrix perturbation theory can capture this phenomenon.

We design in this section a competitive neural network that is able to store a desired pattern as a stable equilibrium. The theoretical implications are illustrated by an example of a two-neuron network.

Example: Let N = 2, a_i = A, B_j = B, D_{ii} = \alpha > 0, D_{ij} = -\beta < 0 for i \ne j, and let the nonlinearity be the linear function f(x_j) = x_j in equations (1) and (2).

We get for the boundary-layer system:

    \dot{x}_j = -A x_j + \sum_{i=1}^{N} D_{ij} f(x_i) + B S_j        (20)

and for the reduced-order system:

    \dot{S}_j = S_j \left[ \frac{B}{A - \alpha} - 1 \right]        (21)

Then we get for the Lyapunov functions:

    W(x, S) = \frac{1}{2} A \sum_{j} x_j^2 - B \sum_{j} S_j x_j - \frac{1}{2} \sum_{j,k} D_{jk} x_j x_k        (22)

and

    V(S) = \frac{1}{2} \sum_{j} S_j^2        (23)

[Figure 1 appears here: time-series plot, x-axis "time in msec" from 0 to 10.]
Figure 1: Time histories of the neural network with the origin as an equilibrium point: STM states.

For the nonnegative constants we get \alpha_1 = 1 - \frac{B}{A-\alpha}, \alpha_2 = (A - \alpha)^2, c_1 = \gamma = -B, and c_2 = \beta_1 = \beta_2 = 1; these are nonnegative provided A - \alpha > 0 and B < 0.

The above implications can be interpreted as follows: to achieve a stable equilibrium point (0,0) we should have a negative contribution of the external stimulus term, and the sum of the excitatory and inhibitory contributions of the neurons should be less than the time constant of a neuron. An evolution of the trajectories of the STM and LTM states for a two-neuron system is shown in figures 1 and 2. The STM states first exhibit an oscillation about the expected equilibrium point, while the LTM states reach the equilibrium point monotonically. We can see from the figures that the equilibrium point (0,0) is reached after 5 msec by the STM and LTM states.

Choosing B = -5, A = 1 and \alpha = 0.5 we obtain

    \epsilon^*(d) = \frac{2.75}{55 + \frac{1}{4d(1-d)}}

From this formula we can see that \epsilon^*(d) has a maximum at d = d^* = 0.5.

5 CONCLUSIONS

We presented in this paper a quadratic-type Lyapunov function for analyzing the stability of equilibrium points of competitive neural networks with fast and slow dynamics. This global stability analysis method interprets neural networks as nonlinear singularly perturbed systems. The equilibrium point is constrained to a neighborhood of (0,0). The technique presupposes a monotonically increasing nonlinearity and a symmetric lateral inhibition matrix. The learning rule is a deterministic Hebbian.
This method gives an upper bound on the perturbation parameter \epsilon below which asymptotic stability of the coupled system is guaranteed.

[Figure 2: Time histories of the neural network with the origin as an equilibrium point: LTM states.]
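The qualitative behaviour reported for the two-neuron example (an oscillatory STM transient, monotone LTM decay, and convergence to the origin) can be checked by direct simulation. In this sketch A = 1, \alpha = 0.5 and B = -5 follow the text, while \beta = 0.2 and \epsilon = 0.01 (below \epsilon^*(0.5) \approx 0.049) are assumed values chosen for illustration:

```python
import numpy as np

# Two-neuron example: eps * x' = -A x + D x + B S (linear f),  S' = -S + x.
# A, alpha, B follow the text; beta and eps are assumed for illustration.
A, alpha, beta, B, eps = 1.0, 0.5, 0.2, -5.0, 0.01
D = np.array([[alpha, -beta],
              [-beta, alpha]])

def simulate(x0, S0, T=10.0, dt=1e-4):
    """Forward-Euler run over T msec; returns the final (STM, LTM) states."""
    x, S = np.array(x0, float), np.array(S0, float)
    for _ in range(int(T / dt)):
        x = x + (dt / eps) * (-A * x + D @ x + B * S)  # fast STM equation
        S = S + dt * (-S + x)                          # slow LTM equation
    return x, S

x, S = simulate([0.5, -0.3], [0.2, 0.1])
# Both state vectors settle at the stable equilibrium (0, 0).
```

Note that \epsilon^*(d) is only a sufficient bound: choosing \epsilon above it does not necessarily destabilize the system, but stability is then no longer guaranteed by the composite Lyapunov function.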