{"title": "A Neural-Network Solution to the Concentrator Assignment Problem", "book": "Neural Information Processing Systems", "page_first": 775, "page_last": 782, "abstract": null, "full_text": "775 \n\nA NEURAL-NETWORK SOLUTION TO THE CONCENTRATOR \n\nASSIGNNlENT PROBLEM \n\nGene A. Tagliarini \nEdward W. Page \n\nDepartment of Computer Science, Clemson University, Clemson, SC \n\n29634-1906 \nABSTRACT \n\nNetworks of simple analog processors having neuron-like properties have \nbeen employed to compute good solutions to a variety of optimization prob(cid:173)\nlems. This paper presents a neural-net solution to a resource allocation prob(cid:173)\nlem that arises in providing local access to the backbone of a wide-area com(cid:173)\nmunication network. The problem is described in terms of an energy function \nthat can be mapped onto an analog computational network. Simulation results \ncharacterizing the performance of the neural computation are also presented. \n\nINTRODUCTION \n\nThis paper presents a neural-network solution to a resource allocation \nproblem that arises in providing access to the backbone of a communication \nnetwork. 1 In the field of operations research, this problem was first known as \nthe warehouse location problem and heuristics for finding feasible, suboptimal \nsolutions have been developed previously.2. 3 More recently it has been known \nas the multifacility location problem4 and as the concentrator assignment prob(cid:173)\nlem.1 \n\nTHE HOPFIELD NEURAL NETWORK MODEL \n\nThe general structure of the Hopfield neural network model5 \u2022 6,7 is illus(cid:173)\n\ntrated in Fig. 1. Neurons are modeled as amplifiers that have a sigmoid input! \noutput curve as shown in Fig. 2. Synapses are modeled by permitting the out(cid:173)\nput of any neuron to be connected to the input of any other neuron. The \nstrength of the synapse is modeled by a resistive connection between the output \nof a neuron and the input to another. 
The amplifiers provide integrative analog summation of the currents that result from the connections to other neurons as well as connections to external inputs. To model both excitatory and inhibitory synaptic links, each amplifier provides both a normal output V and an inverted output -V. The normal outputs range between 0 and 1, while the inverting amplifier produces corresponding values between 0 and -1. The synaptic link between the output of one amplifier and the input of another is defined by a conductance T_ij, which connects one of the outputs of amplifier j to the input of amplifier i. In the Hopfield model, the connection between neurons i and j is made with a resistor having a value R_ij = 1/|T_ij|. To provide an excitatory synaptic connection (positive T_ij), the resistor is connected to the normal output of amplifier j. To provide an inhibitory connection (negative T_ij), the resistor is connected to the inverted output of amplifier j. The connections among the neurons are defined by a matrix T consisting of the conductances T_ij. Hopfield has shown that a symmetric T matrix (T_ij = T_ji) whose diagonal entries are all zeros causes convergence to a stable state in which the output of each amplifier is either 0 or 1.

This research was supported by the U.S. Army Strategic Defense Command.
© American Institute of Physics 1988

Fig. 1. Schematic for a simplified Hopfield network with four neurons. [figure: four amplifiers with external inputs I1-I4 and outputs V1-V4]
Fig. 2. Amplifier input/output relationship. [figure: sigmoid curve, output V rising from 0 to 1 as the input goes from -u to +u]
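The convergence behavior described above can be made concrete with a small discrete-time simulation. This is an illustrative sketch of our own (not the authors' simulation code): with a symmetric, zero-diagonal T and threshold updates standing in for the high-gain sigmoid, the network settles to a stable 0/1 state.

```python
import numpy as np

def hopfield_step(T, I, v):
    """Asynchronously update each neuron once; the hard threshold
    models the sigmoid amplifier in the high-gain limit."""
    v = v.copy()
    for i in range(len(v)):
        u = T[i] @ v + I[i]           # summed input current to neuron i
        v[i] = 1.0 if u > 0 else 0.0  # amplifier output in {0, 1}
    return v

def run(T, I, v, max_iters=100):
    """Iterate updates until the state stops changing (a stable state)."""
    for _ in range(max_iters):
        nxt = hopfield_step(T, I, v)
        if np.array_equal(nxt, v):
            return v
        v = nxt
    return v
```

For example, two mutually inhibitory neurons (T_12 = T_21 = -2) with unequal external inputs settle into a state where only the more strongly driven neuron is on.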
Additionally, when the amplifiers are operated in the high-gain mode, the stable states of a network of n neurons correspond to the local minima of the quantity

    E = -\frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} T_{ij} V_i V_j - \sum_{i=1}^{n} V_i I_i        (1)

where V_i is the output of the ith neuron and I_i is the externally supplied input to the ith neuron. Hopfield refers to E as the computational energy of the system.

THE CONCENTRATOR ASSIGNMENT PROBLEM

Consider a collection of n sites that are to be connected to m concentrators as illustrated in Fig. 3(a). The sites are indicated by the shaded circles and the concentrators are indicated by squares. The problem is to find an assignment of sites to concentrators that minimizes the total cost of the assignment and does not exceed the capacity of any concentrator. The constraints that must be met can be summarized as follows:

a) Each site i (i = 1, 2, ..., n) is connected to exactly one concentrator; and

b) Each concentrator j (j = 1, 2, ..., m) is connected to no more than k_j sites (where k_j is the capacity of concentrator j).

Figure 3(b) illustrates a possible solution to the problem represented in Fig. 3(a).

Fig. 3. Example concentrator assignment problem. (a) Site/concentrator map; (b) possible assignment. [figure: shaded circles denote sites, open squares denote concentrators]

If the cost of assigning site i to concentrator j is c_ij, then the total cost of a particular assignment is

    total cost = \sum_{i=1}^{n} \sum_{j=1}^{m} x_{ij} c_{ij}        (2)

where x_ij = 1 only if we actually decide to assign site i to concentrator j and is 0 otherwise. There are m^n possible assignments of sites to concentrators that satisfy constraint a).
Exhaustive search techniques are therefore impractical except for relatively small values of m and n.

THE NEURAL NETWORK SOLUTION

This problem is amenable to solution using the Hopfield neural network model. The Hopfield model is used to represent a matrix of possible assignments of sites to concentrators as illustrated in Fig. 4. Each square corresponds to a neuron, and a neuron in row i and column j of the upper n rows of the array represents the hypothesis that site i should be connected to concentrator j. If the neuron in row i and column j is on, then site i should be assigned to concentrator j; if it is off, site i should not be assigned to concentrator j.

Fig. 4. Concentrator assignment array. [figure: an array of neurons with m columns (one per concentrator), n upper rows (one per site), and k_j additional rows per column labeled "SLACK"; the darkly shaded neuron corresponds to the hypothesis that site i should be assigned to concentrator j]

The neurons in the lower sub-array, indicated as "SLACK", are used to implement individual concentrator capacity constraints. The number of slack neurons in a column should equal the capacity (expressed as the number of sites which can be accommodated) of the corresponding concentrator. While it is not necessary to assume that the concentrators have equal capacities, it was assumed here that they did and that their cumulative capacity is greater than or equal to the number of sites.
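The array of Fig. 4 can be sketched directly as a binary matrix. The instance below is our own toy example under the paper's equal-capacity assumption (n site rows plus k slack rows per column); the slack neurons absorb each concentrator's unused capacity so that every column can sum to exactly k.

```python
import numpy as np

# Toy instance (our numbers): 4 sites, 2 concentrators, each of capacity 3.
n, m, k = 4, 2, 3
y = np.zeros((n + k, m))   # rows 0..n-1: sites; rows n..n+k-1: slack neurons

# A feasible assignment: sites 0,1 -> concentrator 0; sites 2,3 -> concentrator 1.
for site, conc in [(0, 0), (1, 0), (2, 1), (3, 1)]:
    y[site, conc] = 1

# Turn on slack neurons to soak up unused capacity, so each column sums to k.
for j in range(m):
    unused = k - int(y[:n, j].sum())
    y[n:n + unused, j] = 1
```

With the slack rows filled in this way, each site row sums to one (constraint a) and each column sums to its capacity, which is exactly the state the energy function of the next section rewards.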
To enable the neurons in the network illustrated above to compute solutions to the concentrator problem, the network must realize an energy function in which the lowest energy states correspond to the least cost assignments. The energy function must therefore favor states which satisfy constraints a) and b) above as well as states that correspond to a minimum cost assignment. The energy function is implemented in terms of connection strengths between neurons. The following section details the construction of an appropriate energy function.

THE ENERGY FUNCTION

Consider the following energy equation:

    E = A \sum_{i=1}^{n} ( \sum_{j=1}^{m} y_{ij} - 1 )^2
      + B \sum_{j=1}^{m} ( \sum_{i=1}^{n+k_j} y_{ij} - k_j )^2
      + C \sum_{j=1}^{m} \sum_{i=1}^{n+k_j} y_{ij} ( 1 - y_{ij} )        (3)

where y_ij is the output of the amplifier in row i and column j of the neuron matrix, m and n are the number of concentrators and the number of sites respectively, and k_j is the capacity of concentrator j.

The first term will be minimum when the sum of the outputs in each row of neurons associated with a site equals one. Notice that this term influences only those rows of neurons which correspond to sites; no term is used to coerce the rows of slack neurons into a particular state.

The second term of the equation will be minimum when the sum of the outputs in each column equals the capacity k_j of the corresponding concentrator. The presence of the k_j slack neurons in each column allows this term to enforce the concentrator capacity restrictions. The effect of this term upon the upper sub-array of neurons (those which correspond to site assignments) is that no more than k_j sites will be assigned to concentrator j.
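The roles of the three terms of Eq. 3 can be checked numerically: a state satisfying constraints a) and b) with binary outputs incurs zero penalty, while any violation raises the energy. A minimal sketch (our own function names, assuming equal capacities k):

```python
import numpy as np

def constraint_energy(y, n, k, A=1.0, B=1.0, C=1.0):
    """Evaluate the three penalty terms of Eq. 3 for an (n + k) x m
    output array y, under the paper's equal-capacity assumption."""
    row_term = A * ((y[:n].sum(axis=1) - 1.0) ** 2).sum()  # a): one concentrator per site
    col_term = B * ((y.sum(axis=0) - k) ** 2).sum()        # b): each column sums to capacity k
    bin_term = C * (y * (1.0 - y)).sum()                   # favor outputs near 0 or 1
    return row_term + col_term + bin_term
```

Note that `row_term` sums only over the first n rows, mirroring the remark above that no term coerces the slack rows themselves.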
The number of neurons to be turned on in column j is k_j; consequently, the number of neurons turned on in column j of the assignment sub-array will be less than or equal to k_j.

The third term causes the energy function to favor the "zero" and "one" states of the individual neurons by being minimum when all neurons are in one or the other of these states. This term influences all neurons in the network.

In summary, the first term enforces constraint a) and the second term enforces constraint b) above. The third term guarantees that a choice is actually made; it assures that each neuron in the matrix will assume a final state near zero or one, corresponding to the x_ij term of the cost equation (Eq. 2).

After some algebraic re-arrangement, Eq. 3 can be written in the form of Eq. 1 where

    T_{ij,kl} = A \delta(i,k) (1 - \delta(j,l)) + B \delta(j,l) (1 - \delta(i,k)),  if i <= n and k <= n;
    T_{ij,kl} = C \delta(j,l) (1 - \delta(i,k)),  if i > n or k > n

where \delta(p,q) = 1 if p = q and 0 otherwise.