{"title": "Stability Results for Neural Networks", "book": "Neural Information Processing Systems", "page_first": 554, "page_last": 563, "abstract": null, "full_text": "554 \n\nSTABILITY RESULTS FOR NEURAL NETWORKS \n\nA. N. Michell, J. A. FarreUi , and W. Porod2 \n\nDepartment of Electrical and Computer Engineering \n\nUniversity of Notre Dame \n\nNotre Dame, IN 46556 \n\nABSTRACT \n\nIn the present paper we survey and utilize results from the qualitative theory of large \nscale interconnected dynamical systems in order to develop a qualitative theory for the \nHopfield model of neural networks. In our approach we view such networks as an inter(cid:173)\nconnection of many single neurons. Our results are phrased in terms of the qualitative \nproperties of the individual neurons and in terms of the properties of the interconnecting \nstructure of the neural networks. Aspects of neural networks which we address include \nasymptotic stability, exponential stability, and instability of an equilibrium; estimates \nof trajectory bounds; estimates of the domain of attraction of an asymptotically stable \nequilibrium; and stability of neural networks under structural perturbations. \n\nINTRODUCTION \n\nIn recent years, neural networks have attracted considerable attention as candidates \nfor novel computational systemsl- 3 . These types of large-scale dynamical systems, in \nanalogy to biological structures, take advantage of distributed information processing \nand their inherent potential for parallel computation4,5. Clearly, the design of such \nneural-network-based computational systems entails a detailed understanding of the \ndynamics of large-scale dynamical systems. In particular, the stability and instability \nproperties of the various equilibrium points in such networks are of interest, as well \nas the extent of associated domains of attraction (basins of attraction) and trajectory \nbounds. 
\n\nIn the present paper, we apply and survey results from the qualitative theory of large scale interconnected dynamical systems6-9 in order to develop a qualitative theory for neural networks. We will concentrate here on the popular Hopfield model3; however, this type of analysis may also be applied to other models. In particular, we will address the following problems: (i) determine the stability properties of a given equilibrium point; (ii) given that a specific equilibrium point of a neural network is asymptotically stable, establish an estimate for its domain of attraction; (iii) given a set of initial conditions and external inputs, establish estimates for corresponding trajectory bounds; (iv) give conditions for the instability of a given equilibrium point; (v) investigate stability properties under structural perturbations. The present paper contains local results. A more detailed treatment of local stability results can be found in Ref. 10, whereas global results are contained in Ref. 11. \n\nIn arriving at the results of the present paper, we make use of the method of analysis advanced in Ref. 6. Specifically, we view a high dimensional neural network as an \n\n1The work of A. N. Michel and J. A. Farrell was supported by NSF under grant ECS84-19918. \n2The work of W. Porod was supported by ONR under grant N00014-86-K-0506. \n\n© American Institute of Physics 1988 \n\n555 \n\ninterconnection of individual subsystems (neurons). This interconnected systems viewpoint makes our results distinct from others derived in the literature1,12. Our results are phrased in terms of the qualitative properties of the free subsystems (individual neurons, disconnected from the network) and in terms of the properties of the interconnecting structure of the neural network. As such, these results may constitute useful design tools.
This approach makes possible the systematic analysis of high dimensional complex systems and it frequently enables one to circumvent difficulties encountered in the analysis of such systems by conventional methods. \n\nThe structure of this paper is as follows. We start out by defining the Hopfield model and we then introduce the interconnected systems viewpoint. We then present representative stability results, including estimates of trajectory bounds and of domains of attraction, results for instability, and conditions for stability under structural perturbations. Finally, we present concluding remarks. \n\nTHE HOPFIELD MODEL FOR NEURAL NETWORKS \n\nIn the present paper we consider neural networks of the Hopfield type3. Such systems can be represented by equations of the form \n\n\dot{u}_i = -b_i u_i + \sum_{j=1}^{N} A_{ij} G_j(u_j) + U_i(t), for i = 1, ..., N, (1) \n\nwhere A_ij = T_ij/C_i, U_i(t) = I_i(t)/C_i, and b_i = 1/(C_i R_i). As usual, C_i > 0; T_ij = ±1/R_ij ∈ R = (-∞, ∞); 1/R_i = 1/ρ_i + \sum_{j=1}^{N} |T_ij| with ρ_i > 0; I_i : R^+ = [0, ∞) → R, I_i is continuous; \dot{u}_i = du_i/dt; G_i : R → (-1, 1), G_i is continuously differentiable and strictly monotonically increasing (i.e., G_i(u_i') > G_i(u_i'') if and only if u_i' > u_i''), u_i G_i(u_i) > 0 for all u_i ≠ 0, and G_i(0) = 0. In (1), C_i denotes capacitance, R_ij denotes resistance (possibly including a sign inversion due to an inverter), G_i(·) denotes an amplifier nonlinearity, and I_i(·) denotes an external input. \n\nIn the literature it is frequently assumed that T_ij = T_ji for all i, j = 1, ..., N and that T_ii = 0 for all i = 1, ..., N. We will make these assumptions only when explicitly stated. \n\nWe are interested in the qualitative behavior of solutions of (1) near equilibrium points (rest positions where \dot{u}_i ≡ 0, for i = 1, ..., N). By setting the external inputs U_i(t), i = 1, ..., N, equal to zero, we define u* = [u_1*, ..., u_N*]^T ∈ R^N to be an equilibrium for (1) provided that \n\n-b_i u_i* + \sum_{j=1}^{N} A_{ij} G_j(u_j*) = 0, for i = 1, ..., N. \n\nThe locations of such equilibria in R^N are determined by the interconnection pattern of the neural network (i.e., by the parameters A_ij, i, j = 1, ..., N) as well as by the parameters b_i and the nature of the nonlinearities G_i(·), i = 1, ..., N. \n\nThroughout, we will assume that a given equilibrium u* being analyzed is an isolated equilibrium for (1), i.e., there exists an r > 0 such that in the neighborhood B(u*, r) = {u ∈ R^N : |u - u*| < r} no equilibrium for (1), other than u = u*, exists. \n\nWhen analyzing the stability properties of a given equilibrium point, we will be able to assume, without loss of generality, that this equilibrium is located at the origin u = 0 of R^N. If this is not the case, a trivial transformation can be employed which shifts the equilibrium point to the origin and which leaves the structure of (1) the same. \n\n556 \n\nINTERCONNECTED SYSTEMS VIEWPOINT \n\nWe will find it convenient to view system (1) as an interconnection of N free subsystems (or isolated subsystems) described by equations of the form \n\n\dot{p}_i = -b_i p_i + A_{ii} G_i(p_i) + U_i(t). (2) \n\nUnder this viewpoint, the interconnecting structure of the system (1) is given by \n\ng_i(x_1, ..., x_N) ≜ \sum_{j=1, j≠i}^{N} A_{ij} G_j(x_j), i = 1, ..., N. (3) \n\nFollowing the method of analysis advanced in Ref. 6, we will establish stability results which are phrased in terms of the qualitative properties of the free subsystems (2) and in terms of the properties of the interconnecting structure given in (3). This method of analysis often makes it possible to circumvent difficulties that arise in the analysis of complex high-dimensional systems. Furthermore, results obtained in this manner frequently yield insight into the dynamic behavior of systems in terms of system components and interconnections.
\n\nGENERAL STABILITY CONDITIONS \n\nWe demonstrate below an example of a result for exponential stability of an equilibrium point. The principal Lyapunov stability results for such systems are presented, e.g., in Chapter 5 of Ref. 7. \n\nWe will utilize the following hypotheses in our first result. \n\n(A-1) For system (1), the external inputs are all zero, i.e., U_i(t) ≡ 0, i = 1, ..., N. \n\n(A-2) For system (1), the interconnections satisfy the estimates \n\nx_i A_{ii} G_i(x_i) ≤ a_{ii} x_i^2, |A_{ij} G_j(x_j)| ≤ a_{ij} |x_j|, i ≠ j, \n\nfor all |x_i| < r_i, |x_j| < r_j, i, j = 1, ..., N, where the a_ij are real constants. \n\n(A-3) There exists an N-vector a > 0 (i.e., a^T = (a_1, ..., a_N) and a_i > 0 for all i = 1, ..., N) such that the test matrix S = [s_ij], given by \n\ns_ij = a_i(-b_i + a_ii) for i = j, and s_ij = (1/2)(a_i a_ij + a_j a_ji) for i ≠ j, \n\nis negative definite, where the b_i are defined in (1) and the a_ij are given in (A-2). \n\n557 \n\nWe are now in a position to state and prove the following result. \n\nTheorem 1 The equilibrium x = 0 of the neural network (1) is exponentially stable if hypotheses (A-1), (A-2) and (A-3) are satisfied. \n\nProof. For (1) we choose the Lyapunov function \n\nv(x) = \sum_{i=1}^{N} (1/2) a_i x_i^2, (4) \n\nwhere the a_i are given in (A-3). This function is clearly positive definite. The time derivative of v along the solutions of (1) is given by \n\nDv_(1)(x) = \sum_{i=1}^{N} a_i x_i [-b_i x_i + \sum_{j=1}^{N} A_{ij} G_j(x_j)], \n\nwhere (A-1) has been invoked. In view of (A-2) we have \n\nDv_(1)(x) ≤ \sum_{i=1}^{N} a_i (-b_i x_i^2 + a_{ii} x_i^2 + \sum_{j≠i} a_{ij} |x_i| |x_j|) = w^T R w, for all |x|_2 < r, \n\nwhere r = min_i(r_i), |x|_2 = (\sum_{i=1}^{N} x_i^2)^{1/2}, w = (|x_1|, ..., |x_N|)^T, and the matrix R = [r_ij] is given by \n\nr_ij = a_i(-b_i + a_ii) for i = j, and r_ij = a_i a_ij for i ≠ j. \n\nBut it follows that \n\nw^T R w = w^T ((R + R^T)/2) w = w^T S w ≤ λ_M(S) |x|_2^2, (5) \n\nwhere S is the matrix given in (A-3) and λ_M(S) denotes the largest eigenvalue of the real symmetric matrix S. Since S is by assumption negative definite, we have λ_M(S) < 0.
It follows from (4) and (5) that in some neighborhood of the origin x = 0, we have c_1 |x|_2^2 ≤ v(x) ≤ c_2 |x|_2^2 and Dv_(1)(x) ≤ -c_3 |x|_2^2, where c_1 = (1/2) min_i a_i > 0, c_2 = (1/2) max_i a_i > 0, and c_3 = -λ_M(S) > 0. Hence, the equilibrium x = 0 of the neural network (1) is exponentially stable (cf. Theorem 9.10 in Ref. 7). \n\nConsistent with the philosophy of viewing the neural network (1) as an interconnection of N free subsystems (2), we think of the Lyapunov function (4) as consisting of a weighted sum of Lyapunov functions for each free subsystem (2) (with U_i(t) ≡ 0). The weighting vector a > 0 provides flexibility to emphasize the relative importance of the qualitative properties of the various individual subsystems. Hypothesis (A-2) provides a measure of interaction between the various subsystems (3). Furthermore, it is emphasized that Theorem 1 does not require that the parameters A_ij in (1) form a symmetric matrix. \n\n558 \n\nWEAK COUPLING CONDITIONS \n\nThe test matrix S given in hypothesis (A-3) has off-diagonal terms which may be positive or nonpositive. For the special case where the off-diagonal terms of the test matrix S = [s_ij] are non-negative, equivalent stability results may be obtained which are much easier to apply than Theorem 1. Such results are called weak-coupling conditions in the literature6,9. The conditions s_ij ≥ 0 for all i ≠ j may reflect properties of the system (1) or they may be the consequence of a majorization process. \n\nIn the proof of the subsequent result, we will make use of some of the properties of M-matrices (see, for example, Chapter 2 in Ref. 6). In addition we will use the following assumptions. \n\n(A-4) For system (1), the nonlinearity G_i(x_i) satisfies the sector condition \n\nσ_{i1} x_i^2 ≤ x_i G_i(x_i) ≤ σ_{i2} x_i^2, for all |x_i| < r_i, where σ_{i2} ≥ σ_{i1} > 0. \n\n(A-5) The successive principal minors of the N × N test matrix D = [d_ij], given by \n\nd_ij = b_i/σ_{i2} - A_{ii} for i = j, and d_ij = -|A_ij| for i ≠ j, \n\nare all positive, where the b_i and A_ij are defined in (1) and σ_{i2} is defined in (A-4).
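Hypotheses such as (A-3) and (A-5) are finite matrix tests that can be checked mechanically. Below is a Python sketch of the (A-5) check, taking the entries of the test matrix to be d_ii = b_i/σ_i2 − A_ii and d_ij = −|A_ij| for i ≠ j, consistent with the role D plays in the proofs; this assumed form and all numerical values are illustrative, not taken from the paper. The (A-3) check is analogous, e.g. via the eigenvalues of S.

```python
import numpy as np

# Illustrative data (assumptions, not from the paper): decay rates b_i,
# gains A_ij, and upper sector bounds sigma_i2 for the nonlinearities G_i.
b = np.array([1.0, 1.2, 0.8])
A = np.array([[0.20, 0.10, -0.10],
              [0.05, 0.30,  0.10],
              [-0.10, 0.20,  0.25]])
sigma2 = np.array([1.0, 1.0, 1.0])

# Assumed test matrix: d_ii = b_i/sigma_i2 - A_ii, d_ij = -|A_ij| (i != j).
D = -np.abs(A)
np.fill_diagonal(D, b / sigma2 - np.diag(A))

# (A-5)-style test: all successive (leading) principal minors of D must be
# positive, i.e. D must be a nonsingular M-matrix.
minors = [np.linalg.det(D[:k, :k]) for k in range(1, D.shape[0] + 1)]
satisfied = all(m > 0 for m in minors)
```

For the numbers above every leading minor is positive, so this hypothetical network passes the weak-coupling test.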
\nTheorem 2 The equilibrium x = 0 of the neural network (1) is asymptotically stable if hypotheses (A-1), (A-4) and (A-5) are true. \n\nProof. The proof proceeds10 along lines similar to the one for Theorem 1, this time with the following Lyapunov function \n\nv(x) = \sum_{i=1}^{N} α_i |x_i|. (6) \n\nThe above Lyapunov function again reflects the interconnected nature of the whole system. Note that this Lyapunov function may be viewed as a generalized Hamming distance of the state vector from the origin. \n\nESTIMATES OF TRAJECTORY BOUNDS \n\nIn general, one is not only interested in questions concerning the stability of an equilibrium of the system (1), but also in performance. One way of assessing the qualitative properties of the neural system (1) is by investigating solution bounds near an equilibrium of interest. We present here such a result by assuming that the hypotheses of Theorem 2 are satisfied. \n\nIn the following, we will not require that the external inputs U_i(t), i = 1, ..., N be zero. However, we will need to make the additional assumptions enumerated below. \n\n559 \n\n(A-6) Assume that there exist λ_i > 0, for i = 1, ..., N, and an ε > 0 such that \n\n(b_i/σ_{i2} - A_{ii}) - \sum_{j=1, j≠i}^{N} (λ_j/λ_i) |A_ji| ≥ ε > 0, i = 1, ..., N, \n\nwhere the b_i and A_ij are defined in (1) and σ_{i2} is defined in (A-4). \n\n(A-7) Assume that for system (1), \n\n\sum_{i=1}^{N} λ_i |U_i(t)| ≤ k for all t ≥ 0, \n\nfor some constant k > 0, where the λ_i, i = 1, ..., N are defined in (A-6). \n\nIn the proof of our next theorem, we will make use of a comparison result. We consider a scalar comparison equation of the form \dot{y} = G(y), where y ∈ R, G : B(r) → R for some r > 0, and G is continuous on B(r) = {x ∈ R : |x| < r}.
We can then prove the following auxiliary theorem: Let p(t) denote the maximal solution of the comparison equation with p(t_0) = y_0 ∈ B(r), t ≥ t_0 ≥ 0. If r(t), t ≥ t_0 ≥ 0, is a continuous function such that r(t_0) ≤ y_0, and if r(t) satisfies the differential inequality Dr(t) = lim sup_{k→0+} (1/k)[r(t + k) - r(t)] ≤ G(r(t)) almost everywhere, then r(t) ≤ p(t) for t ≥ t_0 ≥ 0, for as long as both r(t) and p(t) exist. For the proof of this result, as well as other comparison theorems, see, e.g., Refs. 6 and 7. \n\nFor the next theorem, we adopt the following notation. We let δ = min_i σ_{i1}, where σ_{i1} is defined in (A-4), we let c = εδ, where ε is given in (A-6), and we let φ(t, t_0, x_0) = [φ_1(t, t_0, x_0), ..., φ_N(t, t_0, x_0)]^T denote the solution of (1) with φ(t_0, t_0, x_0) = x_0 = (x_10, ..., x_N0)^T for some t_0 ≥ 0. \n\nWe are now in a position to prove the following result, which provides bounds for the solutions of (1). \n\nTheorem 3 Assume that hypotheses (A-6) and (A-7) are satisfied. Then \n\n||φ(t, t_0, x_0)|| = \sum_{i=1}^{N} λ_i |φ_i(t, t_0, x_0)| ≤ (a - k/c) e^{-c(t - t_0)} + k/c, t ≥ t_0 ≥ 0, \n\nprovided that a > k/c and ||x_0|| = \sum_{i=1}^{N} λ_i |x_i0| ≤ a, where the λ_i, i = 1, ..., N are given in (A-6) and k is given in (A-7). \n\nProof. For (1) we choose the Lyapunov function \n\nv(x) = \sum_{i=1}^{N} λ_i |x_i|. (7) \n\n560 \n\nAlong the solutions of (1), we obtain \n\nDv_(1)(x) ≤ -λ^T D w + \sum_{i=1}^{N} λ_i |U_i(t)|, (8) \n\nwhere w^T = [(G_1(x_1)/x_1)|x_1|, ..., (G_N(x_N)/x_N)|x_N|], λ = (λ_1, ..., λ_N)^T, and D = [d_ij] is the test matrix given in (A-5). Note that when (A-6) is satisfied, as in the present theorem, then (A-5) is automatically satisfied. Note also that w ≥ 0 (i.e., w_i ≥ 0, i = 1, ..., N) and w = 0 if and only if x = 0. Using manipulations involving (A-6), (A-7) and (8), it is easy to show that Dv_(1)(x) ≤ -c v(x) + k.
This inequality now yields the comparison equation \dot{y} = -c y + k, whose unique solution is given by \n\np(t, t_0, p_0) = (p_0 - k/c) e^{-c(t - t_0)} + k/c, for all t ≥ t_0. \n\nIf we let r(t) = v(φ(t, t_0, x_0)), then we obtain from the comparison result \n\np(t) ≥ r(t) = v(φ(t, t_0, x_0)) = \sum_{i=1}^{N} λ_i |φ_i(t, t_0, x_0)| = ||φ(t, t_0, x_0)||, \n\ni.e., the desired estimate is true, provided that r(t_0) = \sum_{i=1}^{N} λ_i |x_i0| = ||x_0|| ≤ a and a > k/c. \n\nESTIMATES OF DOMAINS OF ATTRACTION \n\nNeural networks of the type considered herein have many equilibrium points. If a given equilibrium is asymptotically stable, or exponentially stable, then the extent of this stability is of interest. As usual, we assume that x = 0 is the equilibrium of interest. If φ(t, t_0, x_0) denotes a solution of the network (1) with φ(t_0, t_0, x_0) = x_0, then we would like to know for which points x_0 it is true that φ(t, t_0, x_0) tends to the origin as t → ∞. The set of all such points x_0 makes up the domain of attraction (the basin of attraction) of the equilibrium x = 0. In general, one cannot determine such a domain in its entirety. However, several techniques have been devised to estimate subsets of a domain of attraction. We apply one such method to neural networks, making use of Theorem 1. This technique is applicable to our other results as well, by making appropriate modifications. \n\nWe assume that the hypotheses (A-1), (A-2) and (A-3) are satisfied and for the free subsystem (2) we choose the Lyapunov function \n\nv_i(p_i) = (1/2) p_i^2. (9) \n\nThen Dv_i(2)(p_i) ≤ (-b_i + a_ii) p_i^2 for |p_i| < r_i, for some r_i > 0. If (A-3) is satisfied, we must have (-b_i + a_ii) < 0, and Dv_i(2)(p_i) is negative definite over B(r_i). \n\nLet C_v0i = {p_i ∈ R : v_i(p_i) = (1/2) p_i^2 < (1/2) r_i^2 ≜ v_0i}. Then C_v0i is contained in the domain of attraction of the equilibrium p_i = 0 for the free subsystem (2).
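The comparison step above is elementary to verify numerically: the scalar equation \dot{y} = -cy + k has the closed-form solution used in the proof, and its trajectories decay to the ultimate bound k/c. A Python cross-check with illustrative constants c, k and initial value p_0 (assumptions, not values from the paper):

```python
import math

# Illustrative constants (assumptions): c = eps*delta > 0, input bound k,
# initial value p0 of the comparison solution, and initial time t0.
c, k, p0, t0 = 0.5, 0.2, 2.0, 0.0

def p_closed(t):
    """Closed-form solution of dy/dt = -c*y + k:  (p0 - k/c)e^{-c(t-t0)} + k/c."""
    return (p0 - k / c) * math.exp(-c * (t - t0)) + k / c

# Forward-Euler integration of the same equation as an independent check.
y, t, dt = p0, t0, 1e-4
while t < 10.0:
    y += dt * (-c * y + k)
    t += dt
```

At t = 10 the numerical solution agrees with p_closed(10.0) to within the Euler discretization error, and both approach the ultimate bound k/c = 0.4, mirroring the exponential bound of Theorem 3.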
\n\nTo obtain an estimate for the domain of attraction of x = 0 for the whole neural network (1), we use the Lyapunov function \n\n561 \n\nv(x) = \sum_{i=1}^{N} (1/2) α_i x_i^2 = \sum_{i=1}^{N} α_i v_i(x_i). (10) \n\nIt is now an easy matter to show that the set \n\nC_λ = {x ∈ R^N : v(x) = \sum_{i=1}^{N} α_i v_i(x_i) < λ} \n\nwill be a subset of the domain of attraction of x = 0 for the neural network (1), where \n\nλ = min_{1≤i≤N} (α_i v_0i) = min_{1≤i≤N} ((1/2) α_i r_i^2). \n\nIn order to obtain the best estimate of the domain of attraction of x = 0 by the present method, we must choose the α_i in an optimal fashion. The reader is referred to the literature9,13,14 where several methods to accomplish this are discussed. \n\nINSTABILITY RESULTS \n\nSome of the equilibrium points in a neural network may be unstable. We present here a sample instability theorem which may be viewed as a counterpart to Theorem 2. Instability results, formulated as counterparts to other stability results of the type considered herein, may be obtained by making appropriate modifications. \n\n(A-8) For system (1), the interconnections satisfy the estimates \n\nx_i A_{ii} G_i(x_i) < δ_i A_{ii} x_i^2, |x_i A_{ij} G_j(x_j)| ≤ |x_i| |A_ij| σ_{j2} |x_j|, i ≠ j, \n\nwhere δ_i = σ_{i1} when A_ii < 0 and δ_i = σ_{i2} when A_ii > 0, for all |x_i| < r_i and |x_j| < r_j, i, j = 1, ..., N. \n\n(A-9) The successive principal minors of the N × N test matrix D = [d_ij], given by \n\nd_ij = σ_i for i = j, and d_ij = -σ_{j2} |A_ij| for i ≠ j, \n\nare positive, where σ_i = b_i - δ_i A_ii when i ∈ F_s (i.e., stable subsystems) and σ_i = -b_i + δ_i A_ii when i ∈ F_u (i.e., unstable subsystems), with F = F_s ∪ F_u, F = {1, ..., N}, and F_u ≠ ∅. \n\nWe are now in a position to prove the following result. \n\nTheorem 4 The equilibrium x = 0 of the neural network (1) is unstable if hypotheses (A-1), (A-8) and (A-9) are satisfied.
If, in addition, F_s = ∅ (∅ denotes the empty set), then the equilibrium x = 0 is completely unstable. \n\n562 \n\nProof. We choose the Lyapunov function \n\nv(x) = \sum_{i ∈ F_s} α_i |x_i| - \sum_{i ∈ F_u} α_i |x_i|, (11) \n\nwhere α_i > 0, i = 1, ..., N. Along the solutions of (1) we have (following the proof of Theorem 2) \n\nDv_(1)(x) ≤ -α^T D w for all x ∈ B(r), r = min_i r_i, \n\nwhere α^T = (α_1, ..., α_N), D is defined in (A-9), and w^T = (|x_1|, ..., |x_N|). We conclude that Dv_(1)(x) is negative definite over B(r). Since every neighborhood of the origin x = 0 contains at least one point x' where v(x') < 0, it follows that the equilibrium x = 0 for (1) is unstable. Moreover, when F_s = ∅, then the function v(x) is negative definite and the equilibrium x = 0 of (1) is in fact completely unstable (cf. Chapter 5 in Ref. 7). \n\nSTABILITY UNDER STRUCTURAL PERTURBATIONS \n\nIn specific applications involving adaptive schemes for learning algorithms in neural networks, the interconnection patterns (and external inputs) are changed to yield an evolution of different sets of desired asymptotically stable equilibrium points with appropriate domains of attraction. The present diagonal dominance conditions (see, e.g., hypothesis (A-6)) can be used as constraints to guarantee that the desired equilibria always have the desired stability properties. \n\nTo be more specific, we assume that a given neural network has been designed with a set of interconnections whose strengths can be varied from zero to some specified values. We express this by writing in place of (1), \n\n\dot{x}_i = -b_i x_i + \sum_{j=1}^{N} θ_ij A_{ij} G_j(x_j) + U_i(t), for i = 1, ..., N, (12) \n\nwhere 0 ≤ θ_ij ≤ 1. We also assume that in the given neural network things have been arranged in such a manner that for some given desired value Δ > 0, it is true that Δ = min_i (b_i/σ_{i2} - θ_ii A_ii). From what has been said previously, it should now be clear that if U_i(t) ≡ 0, i = 1, ..., N, and if the diagonal dominance conditions \n\nΔ - \sum_{j=1, j≠i}^{N} (λ_j/λ_i) |θ_ij A_ij| > 0, for i = 1, ..., N, (13) \n\nare satisfied for some λ_i > 0, i = 1, ..., N, then the equilibrium x = 0 for (12) will be asymptotically stable. It is important to recognize that condition (13) constitutes a single stability condition for the neural network under structural perturbations. Thus, the strengths of interconnections of the neural network may be rearranged in any manner to achieve some desired set of equilibrium points. If (13) is satisfied, then these equilibria will be asymptotically stable. (Stability under structural perturbations is nicely surveyed in Ref. 15.) \n\n563 \n\nCONCLUDING REMARKS \n\nIn the present paper we surveyed and applied results from the qualitative theory of large scale interconnected dynamical systems in order to develop a qualitative theory for neural networks of the Hopfield type. Our results are local and use as much information as possible in the analysis of a given equilibrium. In doing so, we established criteria for the exponential stability, asymptotic stability, and instability of an equilibrium in such networks. We also devised methods for estimating the domain of attraction of an asymptotically stable equilibrium and for estimating trajectory bounds for such networks. Furthermore, we showed that our stability results are applicable to systems under structural perturbations (e.g., as experienced in neural networks in adaptive learning schemes). \n\nIn arriving at the above results, we viewed neural networks as an interconnection of many single neurons, and we phrased our results in terms of the qualitative properties of the free single neurons and in terms of the network interconnecting structure.
\nThis viewpoint is particularly well suited for the study of hierarchical structures which naturally lend themselves to implementations16 in VLSI. Furthermore, this type of approach makes it possible to circumvent difficulties which usually arise in the analysis and synthesis of complex high dimensional systems. \n\nREFERENCES \n\n[1] For a review, see Neural Networks for Computing, J. S. Denker, Editor, American Institute of Physics Conference Proceedings 151, Snowbird, Utah, 1986. \n[2] J. J. Hopfield and D. W. Tank, Science 233, 625 (1986). \n[3] J. J. Hopfield, Proc. Natl. Acad. Sci. U.S.A. 79, 2554 (1982), and ibid. 81, 3088 (1984). \n[4] G. E. Hinton and J. A. Anderson, Editors, Parallel Models of Associative Memory, Erlbaum, 1981. \n[5] T. Kohonen, Self-Organization and Associative Memory, Springer-Verlag, 1984. \n[6] A. N. Michel and R. K. Miller, Qualitative Analysis of Large Scale Dynamical Systems, Academic Press, 1977. \n[7] R. K. Miller and A. N. Michel, Ordinary Differential Equations, Academic Press, 1982. \n[8] I. W. Sandberg, Bell System Tech. J. 48, 35 (1969). \n[9] A. N. Michel, IEEE Trans. on Automatic Control 28, 639 (1983). \n[10] A. N. Michel, J. A. Farrell, and W. Porod, submitted for publication. \n[11] J.-H. Li, A. N. Michel, and W. Porod, IEEE Trans. Circ. and Syst., in press. \n[12] G. A. Carpenter, M. A. Cohen, and S. Grossberg, Science 235, 1226 (1987). \n[13] M. A. Pai, Power System Stability, North-Holland, Amsterdam, 1981. \n[14] A. N. Michel, N. R. Sarabudla, and R. K. Miller, Circuits, Systems and Signal Processing 1, 171 (1982). \n[15] Lj. T. Grujic, A. A. Martynyuk and M. Ribbens-Pavella, Stability of Large-Scale Systems Under Structural and Singular Perturbations, Naukova Dumka, Kiev, 1984. \n[16] D. K. Ferry and W. Porod, Superlattices and Microstructures 2, 41 (1986).
", "award": [], "sourceid": 36, "authors": [{"given_name": "Anthony", "family_name": "Michel", "institution": null}, {"given_name": "Jay", "family_name": "Farrell", "institution": null}, {"given_name": "Wolfgang", "family_name": "Porod", "institution": null}]}