{"title": "Neural Network Methods for Optimization Problems", "book": "Advances in Neural Information Processing Systems", "page_first": 1184, "page_last": 1185, "abstract": "", "full_text": "Neural Network Methods  for \n\nOptimization Problems \n\nArun Jagota \n\nDepartment of Mathematical Sciences \n\nMemphis State University \n\nMemphis,  TN  38152 \n\nE-mail:  jagota~nextl.msci.memst.edu \n\nIn  a  talk  entitled  \"Trajectory  Control  of Convergent  Networks  with  applications \nto TSP\", Natan Peterfreund (Computer Science, Technion)  dealt with the problem \nof controlling the  trajectories of continuous convergent neural  networks models for \nsolving optimization  problems, without affecting  their equilibria set and  their  con(cid:173)\nvergence  properties.  Natan  presented  a  class  of feedback  control  functions  which \nachieve this objective, while also improving the convergence rates.  A modified Hop(cid:173)\nfield  and  Tank  neural  network  model,  developed  through  the  proposed  feedback \napproach,  was  found  to  substantially  improve  the  results  of the  original  model in \nsolving  the Traveling Salesman  Problem.  The  proposed feedback  overcame the 2n \nsymmetric property of the TSP problem. \n\nIn  a  talk  entitled  \"Training Feedforward  Neural  Networks  quickly  and  accurately \nusing  Very  Fast Simulated Reannealing  Methods\",  Bruce  Rosen  (Asst.  Professor, \nComputer Science,  UT San Antonio)  presented  the Very Fast Simulated Reanneal(cid:173)\ning  (VFSR)  algorithm for  training feedforward neural networks [2].  VFSR Trained \nnetworks avoid getting stuck  in  local  minima and  statistically  guarantee the  find(cid:173)\ning  of an  optimal  weights set.  The  method  can  be  used  when  network  activation \nfunctions  are  nondifferentiable,  and although often slower than gradient descent,  it \nis  faster  than other Simulated Annealing methods.  
The performances of conjugate gradient descent and VFSR-trained networks were demonstrated on a set of difficult logic problems. \n\nIn a talk entitled \"A General Method for Finding Solutions of Covering Problems by Neural Computation\", Tal Grossman (Complex Systems, Los Alamos) presented a neural network algorithm for finding small minimal covers of hypergraphs. The network has two sets of units, the first representing the hyperedges to be covered and the second representing the vertices. The connections between the units are determined by the edges of the incidence graph. The dynamics of these two types of units are different. When the parameters of the units are correctly tuned, the stable states of the system correspond to the possible covers. As an example, he found new large square-free subgraphs of the hypercube. \n\nIn a talk entitled \"Algebraic and Grammatical Design of Relaxation Nets\", Eric Mjolsness (Professor, Computer Science, Yale University) presented useful algebraic notation and computer-algebraic syntax for general \"programming\" with optimization ideas, as well as some optimization methods that can be succinctly stated in the proposed notation. He addressed global versus local optimization, time and space cost, learning, expressiveness and scope, and validation on applications. He discussed the methods of algebraic expression (optimization syntax and transformations, grammar models), quantitative methods (statistics and statistical mechanics, multiscale algorithms, optimization methods), and the systematic design approach. 
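The update rules of Grossman's cover-finding network are not given above; as a loose caricature of the two-unit-type idea only (edge units flag hyperedges still uncovered, vertex units switch on in response, and a final pass enforces minimality), assuming nothing about his actual dynamics or parameter tuning, a discrete sketch might be:

```python
# A loose caricature (not Grossman's actual dynamics) of a two-layer
# cover-finding scheme: 'edge units' mark hyperedges not yet touched by
# the cover, 'vertex units' turn on greedily in response to the total
# excitation they receive along the incidence graph, and a pruning pass
# enforces minimality (no vertex in the result can be dropped).
def minimal_cover(hyperedges):
    # hyperedges: iterable of nonempty sets of vertices
    cover = set()
    edges = [set(e) for e in hyperedges]
    while any(not (e & cover) for e in edges):
        # each uncovered edge unit excites its vertex units; activate
        # the vertex unit receiving the most excitation
        score = {}
        for e in edges:
            if not (e & cover):
                for v in e:
                    score[v] = score.get(v, 0) + 1
        cover.add(max(score, key=score.get))
    # minimality pruning: drop any vertex whose removal still leaves
    # every hyperedge covered
    for v in sorted(cover):
        if all(e & (cover - {v}) for e in edges):
            cover.discard(v)
    return cover
```

The result is a minimal (irredundant) cover, though not necessarily a minimum one; Grossman's network likewise targets small minimal covers rather than provably optimal ones.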
\n\nIn a talk entitled \"Algorithms for Touring Knights\", Ian Parberry (Associate Professor, Computer Sciences, University of North Texas) compared Takefuji and Lee's neural network for knight's tours with a random walk and a divide-and-conquer algorithm. The experimental and theoretical evidence indicated that the neural network is the slowest approach, both on a sequential computer and in parallel, and both for generating a single tour and for generating as many tours as possible. \n\nIn a talk entitled \"Report on the DIMACS Combinatorial Optimization Challenge\", Arun Jagota (Asst. Professor, Math Sciences, Memphis State University) presented his work towards the said challenge, on neural network methods for the fast approximate solution of the Maximum Clique problem. The Mean Field Annealing algorithm was implemented on the Connection Machine CM-5. A fast (two-temperature) annealing schedule was experimentally evaluated on random graphs and on the challenge benchmark graphs, and was shown to work well. Several other algorithms, of the randomized local search kind, including one employing reinforcement learning ideas, were also evaluated on the same graphs. It was concluded that the neural network algorithms were in the middle of the solution-quality versus running-time trade-off, in comparison with a variety of conventional methods. \n\nIn a talk entitled \"Optimality in Biological and Artificial Networks\", Daniel Levine (Professor, Mathematics, UT Arlington) previewed a book to appear in 1995 [1]. He then expounded his own view that human cognitive functioning is sometimes, but not always or even most of the time, optimal. 
There is a continuum from the most \"disintegrated\" behavior, associated with frontal lobe damage, to stereotyped or obsessive-compulsive behavior, to entrenched neurotic and bureaucratic habits, to rational maximization of some measurable criteria, and finally to the most \"integrated\" behavior, self-actualization (Abraham Maslow's term), which includes both reason and intuition. He outlined an alternative to simulated annealing, whereby a network that has reached an energy minimum in some but not all of its variables can move out of it through a \"negative affect\" signal that responds to a comparison of energy functions between the current state and imagined alternative states. \n\nReferences \n\n[1] D. S. Levine & W. Elsberry, editors. Optimality in Biological and Artificial Networks? Lawrence Erlbaum Associates, 1995. \n\n[2] B. E. Rosen & J. M. Goodwin. Training hard to learn networks using advanced simulated annealing methods. In Proc. of the ACM Symposium on Applied Computing.", "award": [], "sourceid": 753, "authors": [{"given_name": "Arun", "family_name": "Jagota", "institution": null}]}