{"title": "Stochastic Neurodynamics", "book": "Advances in Neural Information Processing Systems", "page_first": 62, "page_last": 69, "abstract": null, "full_text": "Stochastic Neurodynamics \n\nJ.D. Cowan \n\nDepartment of Mathematics, Committee on Neurobiology, and Brain Research Institute, The University of Chicago, 5734 S. Univ. Ave., Chicago, Illinois 60637 \n\nAbstract \n\nThe main point of this paper is that stochastic neural networks have a mathematical structure that corresponds quite closely with that of quantum field theory. Neural network Liouvillians and Lagrangians can be derived, just as spin Hamiltonians and Lagrangians can be in QFT. It remains to show the efficacy of such a description. \n\n1 INTRODUCTION \n\nA basic problem in the analysis of large-scale neural network activity is that one can never know the initial state of such activity, nor can one safely assume that synaptic weights are symmetric or skew-symmetric. How, then, can one proceed to analyse such activity? One answer is to use a \"Master Equation\" (Van Kampen, 1981). In principle this can provide statistical information, moments, and correlation functions of network activity by making use of ensemble averaging over all possible initial states. In what follows I give a short account of such an approach. \n\n1.1 THE BASIC NEURAL MODEL \n\nIn this approach neurons are represented as simple gating elements which cycle through several internal states whenever the net voltage generated at their activated post-synaptic sites exceeds a threshold. These states are \"quiescent\", \"activated\", and \"refractory\", labelled 'q', 'a', and 'r' respectively. There are then four transitions to consider: q → a, r → a, a → r, and r → q. Two of these, q → a and r → a, are functions of the neural membrane current.
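These two current-dependent rates are easy to visualize numerically. A minimal sketch follows; the logistic form chosen for the smooth function θ[x], the gain, and the threshold currents J_q and J_r are all illustrative assumptions (the model itself only requires θ to be smooth and increasing):

```python
import math

def theta(x, gain=4.0):
    """A smooth, monotonically increasing function theta[x].

    The logistic form and the gain are assumptions made for
    illustration; the model only requires smoothness and monotonicity.
    """
    return 1.0 / (1.0 + math.exp(-gain * x))

def rate_q_to_a(J, J_q=1.0):
    # lambda_q = theta[(J / J_q) - 1]: activation rate of a quiescent neuron.
    return theta(J / J_q - 1.0)

def rate_r_to_a(J, J_r=2.0):
    # lambda_r = theta[(J / J_r) - 1]: activation rate of a refractory neuron.
    return theta(J / J_r - 1.0)
```

With this choice each rate passes through θ[0] = 1/2 exactly at its threshold current and rises smoothly toward 1 above it; taking J_r > J_q makes refractory neurons harder to re-activate.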
I assume that, on the time scale measured in units of τ_m, the membrane time constant, the instantaneous transition rate λ(q → a) is a smooth function of the input current J_i(T). The transition rates λ(q → a) and λ(r → a) are then given by: \n\nλ_q = θ[(J(T)/J_q) - 1] = θ_q[J(T)], (1) \n\nand \n\nλ_r = θ[(J(T)/J_r) - 1] = θ_r[J(T)], (2) \n\nrespectively, where J_q and J_r are the threshold currents related to θ_q and θ_r, θ[x] is a suitable smoothly increasing function of x, and T = t/τ_m. The other two transition rates, λ(a → r) and λ(r → q), are defined simply as the constants α and β. Figure 1 shows the \"kinetic\" scheme that results. \n\nFigure 1. Neural state transition rates \n\nImplicit in this scheme is the smoothing of input current pulses that takes place in the membrane, and also the smoothing caused by the presumed asynchronous activation of synapses. This simplified description of neural state transitions is essential to our investigation of cooperative effects in large nets. \n\n1.2 PROBABILITY DISTRIBUTIONS FOR NEURAL NETWORK ACTIVITY \n\nThe configuration space of a neural network is the space of distinguishable patterns of neural activity. Since each neuron can be in the state q, a, or r, there are 3^N such patterns in a network of N neurons. Since N is O(10^10), the configuration space is in principle very large. This observation, together with the existence of random fluctuations of neural activity, and the impracticability of specifying the initial states of all the neurons in a large network, indicates the need for a probabilistic description of the formation and decay of patterns of neural activity. \n\nLet Q(T), A(T), R(T) denote the numbers of quiescent, activated, and refractory neurons in a network of N neurons at time T. Evidently, \n\nQ(T) + A(T) + R(T) = N. (3) \n\nConsider therefore N neurons in a d-dimensional lattice.
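Before the kinetic scheme is recast in operator language, it can be checked directly by Monte Carlo simulation of the counts (Q, A, R). The sketch below is a minimal discrete-time caricature: the rate constants, the uniform mean-field coupling, and the crude linear stand-ins for θ_q[J] and θ_r[J] are all illustrative assumptions, not values from the paper:

```python
import random

def simulate(N=100, steps=200, alpha=0.2, beta=0.1, seed=0):
    """Discrete-time Monte Carlo caricature of the q -> a -> r -> q scheme.

    All rates here (alpha, beta, the base activation probabilities, and
    the mean-field coupling) are illustrative assumptions.
    """
    rng = random.Random(seed)
    state = ['q'] * N              # all neurons start quiescent
    history = []
    for _ in range(steps):
        A = state.count('a')
        J = A / N                  # mean-field input current, uniform weights
        p_qa = min(1.0, 0.05 + J)  # crude stand-in for theta_q[J]
        p_ra = min(1.0, 0.02 + J)  # crude stand-in for theta_r[J]
        new_state = []
        for s in state:
            u = rng.random()
            if s == 'q':           # quiescent -> activated
                new_state.append('a' if u < p_qa else 'q')
            elif s == 'a':         # activated -> refractory
                new_state.append('r' if u < alpha else 'a')
            else:                  # refractory -> quiescent or re-activated
                if u < beta:
                    new_state.append('q')
                elif u < beta + p_ra:
                    new_state.append('a')
                else:
                    new_state.append('r')
        state = new_state
        history.append((state.count('q'), state.count('a'), state.count('r')))
    return history
```

Every sampled configuration satisfies the conservation law Q(T) + A(T) + R(T) = N of eq. (3), and the fluctuating trajectory of (Q, A, R) is exactly the kind of object whose ensemble statistics a master equation is designed to capture.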
Let a neural state vector be denoted by \n\n|Ω> = |v_1 v_2 ... v_N>, (4) \n\nwhere v_i means that the neuron at site i is in the state v = q, a, or r. Let P[Ω(T)] be the probability of finding the network in state |Ω> at time T, and let \n\n|P(T)> = Σ_Ω P[Ω(T)] |Ω> (5) \n\nbe a neural probability state vector. Evidently \n\nΣ_Ω P[Ω(T)] = 1. (6) \n\n1.3 A NEURAL NETWORK MASTER EQUATION \n\nNow consider the most probable state transitions which can occur in an asynchronous noisy network. These are: \n\n(Q, A, R) → (Q, A, R) no change \n(Q+1, A-1, R) → (Q, A, R) activation of a quiescent cell \n(Q, A-1, R+1) → (Q, A, R) activation of a refractory cell \n(Q, A+1, R-1) → (Q, A, R) an activated cell becomes refractory \n(Q-1, A, R+1) → (Q, A, R) a refractory cell becomes quiescent. \n\nAll other transitions, e.g., those involving two or more elementary transitions in time dT, are assumed to occur with probability O(dT). \n\nThese state transitions can be represented by the action of certain matrices on a set of basis vectors. Let the basis vectors be: \n\n|a> = (1, 0, 0)^T, |q> = (0, 1, 0)^T, |r> = (0, 0, 1)^T, (7) \n\nand consider the Gell-Mann matrices representing the Lie group SU(3) (Georgi, 1982): \n\nλ_1 = [[0, 1, 0], [1, 0, 0], [0, 0, 0]], λ_2 = [[0, -i, 0], [i, 0, 0], [0, 0, 0]], λ_3 = [[1, 0, 0], [0, -1, 0], [0, 0, 0]], λ_4 = [[0, 0, 1], [0, 0, 0], [1, 0, 0]], λ_5 = [[0, 0, -i], [0, 0, 0], [i, 0, 0]], λ_6 = [[0, 0, 0], [0, 0, 1], [0, 1, 0]], λ_7 = [[0, 0, 0], [0, 0, -i], [0, i, 0]], λ_8 = (1/√3)[[1, 0, 0], [0, 1, 0], [0, 0, -2]], (8) \n\nand the raising and lowering operators: \n\nλ_{±1} = (1/2)(λ_1 ± iλ_2), λ_{±2} = (1/2)(λ_4 ± iλ_5), λ_{±3} = (1/2)(λ_6 ± iλ_7). (9) \n\nIt is easy to see that these operators act on the basis vectors |v> as shown in figure 2. \n\nFigure 2. Neural state transitions generated by the raising and lowering operators of the Lie group SU(3). \n\nIt also follows that: \n\nλ_{+1,i} λ_{-1,i} = λ_{+2,i} λ_{-2,i}, (10) \n\nthe projection onto the activated state of the neuron at site i, and that the net membrane current generated at that neuron is: \n\nJ_i = Σ_j w_ij λ_{+1,j} λ_{-1,j} = Σ_j w_ij λ_{+2,j} λ_{-2,j}. (11) \n\nThe entire sequence of neural state transitions into (Q, A, R) can be represented by the operator \"Liouvillian\": \n\nL = α Σ_i (λ_{+2,i} - 1) λ_{-2,i} + β Σ_i (λ_{-3,i} - 1) λ_{+3,i} + (1/N) Σ_i (λ_{-1,i} - 1) λ_{+1,i} θ_q[J_i] + (1/N) Σ_i (λ_{-2,i} - 1) λ_{+2,i} θ_r[J_i]. (12) \n\nThis operator acts on the state vector |P(T)> according to the equation: \n\n∂/∂T |P(T)> = -L |P(T)>. (13) \n\nThis is the neural network analogue of the Schrödinger equation, except that P[Ω(T)] = <Ω|P(T)> is a real probability distribution, and L is not Hermitian. In fact this equation is a Markovian representation of neural network activity (Doi, 1976; Grassberger & Scheunert, 1980), and is the required master equation. \n\n1.4 A SPECIAL CASE: TWO-STATE NEURONS \n\nIt is helpful to consider the simpler case of two-state neurons first, since the group algebra is much simpler. I therefore neglect the refractory state, and use the two-dimensional basis vectors: \n\n|a> = (1, 0)^T, |q> = (0, 1)^T, (14) \n\ncorresponding to the kinetic scheme shown in figure 3a. \n\nFigure 3. (a) Neural state transitions in the two-state case; (b) neural state transitions generated by the raising and lowering operators of the Lie group SU(2). \n\nThe relevant matrices are the well-known Pauli spin matrices representing the Lie group SU(2) (Georgi, 1982), with raising and lowering operators σ_{±i} = (1/2)(σ_{1i} ± iσ_{2i}) generating the transitions shown in figure 3b, so that the two-state Liouvillian takes the form \n\nL = α Σ_i (σ_{+i} - 1) σ_{-i} + (1/N) Σ_i (σ_{-i} - 1) σ_{+i} θ[J_i]. \n\nNow introduce the coherent states \n\n|α> = exp(Σ_i α_i σ_{+i}) |0>, (21) \n\nwhere α_i is a complex number, and <0| is the \"vacuum\" state <q_1 q_2 ... q_N|. It can be shown that <α|Ω> = α_1^{v_1} α_2^{v_2} ... α_N^{v_N}, where v_i = 0 or 1 according as the neuron at site i is quiescent or activated, and that \n\n<α|P(T)> = Σ_Ω P[Ω(T)] <α|Ω> = G(α_1, α_2, ..., α_N), (22) \n\nthe moment generating function for the probability distribution P[Ω(T)]. It can then be shown that: \n\n∂G/∂T = α Σ_i (D_{α_i} - 1) ∂G/∂α_i + (1/N) Σ_i (∂/∂α_i - 1) D_{α_i} θ[J_i] G, \n\nwhere \n\nD_{α_i} = α_i (1 - α_i ∂/∂α_i) and J_i = Σ_j w_ij D_{α_j} ∂/∂α_j; (23) \n\ni.e., the moment generating equation expressed in the \"oscillator-algebra\" representation. \n\n1.5 A NEURAL NETWORK PATH INTEGRAL \n\nThe content of eqns. (22) and (23) can be summarized in a Wiener-Feynman path integral (Schulman, 1981). It can be shown that the transition probability of reaching a state Ω'(T) given the initial state Ω(T_0), the so-called propagator U(Ω', T | Ω, T_0), can be expressed as the path integral: \n\n∫ Π_i 𝒟α_i(T') exp[ ∫_{T_0}^{T} dT' { (1/2) Σ_i (Ḋ*_{α_i} D_{α_i} - D*_{α_i} Ḋ_{α_i}) - L(D*_{α_i}, D_{α_i}) } ], (24) \n\nwhere Ḋ_{α_i} = ∂D_{α_i}/∂T and \n\n𝒟α_i(T') = lim_{n→∞} (2/π)^n Π_{j=1}^{n} d²α_i(j) / (1 + α*_i(j) α_i(j))³, \n\nwith d²α = d(Re α) d(Im α). This propagator is sometimes written as an expectation with respect to the Wiener measure Π_i 𝒟α_i(T) as: \n\nU(Ω' | Ω) = < exp[ -∫_{T_0}^{T} dT' ℒ ] >, (25) \n\nwhere the neural network Lagrangian is defined as: \n\nℒ = L(D*_{α_i}, D_{α_i}) - (1/2) Σ_i (Ḋ*_{α_i} D_{α_i} - D*_{α_i} Ḋ_{α_i}). (26) \n\nThe propagator U contains all the statistics of the network activity. Steepest descent methods, asymptotics, and Liapunov-Schmidt bifurcation methods may be used to evaluate it. \n\n2 CONCLUSIONS \n\nThe main point of this paper is that stochastic neural networks have a mathematical structure that corresponds quite closely with that of quantum field theory. Neural network Liouvillians and Lagrangians can be derived, just as spin Hamiltonians and Lagrangians can be in QFT. It remains to show the efficacy of such a description. \n\nAcknowledgements \n\nThe early stages of this work were carried out in part with Alan Lapedes and David Sharp of the Los Alamos National Laboratory.
We thank the Santa Fe Institute for hospitality and facilities during this work, which was supported in part by grant #N00014-89-J-1099 from the US Department of the Navy, Office of Naval Research. \n\nReferences \n\nVan Kampen, N. (1981), Stochastic Processes in Physics and Chemistry (North-Holland, Amsterdam). \nGeorgi, H. (1982), Lie Algebras in Particle Physics (Benjamin Books, Menlo Park). \nDoi, M. (1976), J. Phys. A: Math. Gen. 9, 1465-1477; 1479-1495. \nGrassberger, P. & Scheunert, M. (1980), Fortschritte der Physik 28, 547-578. \nHecht, K.T. (1987), The Vector Coherent State Method (Springer, New York). \nPerelomov, A. (1986), Generalized Coherent States and Their Applications (Springer, New York). \nMatsubara, T. & Matsuda, H. (1956), A Lattice Model of Liquid Helium, I, Prog. Theoret. Phys. 16, 569-582. \nSchulman, L. (1981), Techniques and Applications of Path Integration (Wiley, New York). \n", "award": [], "sourceid": 424, "authors": [{"given_name": "J.D.", "family_name": "Cowan", "institution": null}]}