{"title": "Utilizing lime: Asynchronous Binding", "book": "Advances in Neural Information Processing Systems", "page_first": 38, "page_last": 44, "abstract": null, "full_text": "Utilizing  Time:  Asynchronous Binding \n\nBradley  C.  Love \n\nDepartment of Psychology \nNorthwestern  University \n\nEvanston, IL  60208 \n\nAbstract \n\nHistorically,  connectionist  systems  have  not excelled  at  represent(cid:173)\ning and manipulating complex structures.  How  can a  system com(cid:173)\nposed  of simple  neuron-like  computing  elements  encode  complex \nrelations?  Recently,  researchers have begun to appreciate that rep(cid:173)\nresentations can extend in  both time and space.  Many researchers \nhave proposed that the synchronous firing of units can encode com(cid:173)\nplex  representations.  I  identify  the  limitations  of  this  approach \nand present an  asynchronous model of binding that effectively rep(cid:173)\nresents  complex  structures.  The  asynchronous model  extends  the \nsynchronous approach.  I argue that our cognitive architecture uti(cid:173)\nlizes  a similar mechanism. \n\n1 \n\nIntroduction \n\nSimple  connectionist  models  can  fall  prey  to  the  \"binding  problem\" .  A  binding \nproblem  occurs  when  two  different  events  (or  objects)  are  represented  identically. \nFor  example,  representing  \"John  hit  Ted\"  by  activating  the  units  JOHN,  HIT, \nand TED would  lead to a  binding problem because the same pattern of activation \nwould also be used to represent  \"Ted hit John\".  The binding problem is ubiquitous \nand  is  a  concern  whenever  internal  representations  are  postulated.  In  addition \nto  guarding  against  the  binding  problem,  an  effective  binding  mechanism  must \nconstruct representations that assist processing.  For instance, different states of the \nworld  must  be  represented  in  a  manner  that  assists  in  discovering  commonalities \nbetween disparate states, allowing for category formation and analogical processing. \n\nInterestingly,  new  connectionist  binding mechanisms  [5,  9,  12J  utilize  time in  their \noperation.  Pollack's Recursive Auto-Associative Memory  (RAAM)  model combines \na  standard fixed-width  multi-layer network architecture with  a  stack and  a  simple \ncontroller, enabling RAAM to encode hierarchical representations over multiple pro(cid:173)\ncessing steps.  RAAM  requires more time to encode representations as they become \nmore  complex,  but  its  space  requirements  remain  constant.  The clearest  example \n\n\fUtilizing Time:  Asynchronous Binding \n\n39 \n\nof utilizing  time  are  models  that  perform  dynamic  binding  through  synchronous \nfirings  of units  [17,  5,  12].  Synchrony models  explicitly  use  time to mark relations \nbetween  units,  distributing complex representations across multiple time steps. \nMost other models neglect the time aspect of representation.  Even synchrony mod(cid:173)\nels  fail  to  fully  utilize  time  (I  will  clarify  this  point  in  a  later  section).  In  this \npaper, a  model is  introduced  (the asynchronous binding mechanism)  that attempts \nto rectify  this  situation.  The asynchronous approach is  similar to the synchronous \napproach  but  is  more  effective  in  binding  complex  representations  and  exploiting \ntime. \n\n2  Utilizing time and  the brain \n\nRepresentational  power  can  be  greatly  increased  by  taking advantage of the  time \ndimension  of  representation.  
For instance, a telephone would need thousands of buttons to make a call if sequences of digits were not used. From the standpoint of a neuron, taking advantage of timing information increases processing capacity by more than 100-fold [13]. While this suggests that the neural code might utilize both time and space resources, the neuroscience community has not yet arrived at a consensus. While it is known that the behavior of a postsynaptic neuron is affected by the location and arrival times of dendritic input [10], it is generally believed that only the rate of firing (a neuron's firing rate is akin to the activation level of a unit in a connectionist network) can code information, as opposed to the timing of spikes, since neurons are noisy devices [14]. However, findings that are taken as evidence for rate coding, like elevated firing rates in memory retention tasks [8], can often be reinterpreted as part of complex cortical events that extend through time [1]. In accord with this view, recent empirical findings suggest that the timing of spikes (e.g., firing patterns, intervals) is also part of the neural code [4, 16]. Contrary to the rate-based view (which holds that only the firing rate of a neuron encodes information), these studies suggest that the timing of spikes encodes information (e.g., when two neurons repeatedly spike together it signifies something different than when they fire out of phase, even if their firing rates are identical in both cases).\n\nBehavioral findings also appear consistent with the idea that time is used to construct complex representations. Behavioral research on illusory conjunction phenomena [15] and on sentence processing performance [11] suggests that bindings or relations are established through time, with bindings becoming more certain as processing proceeds. In summary, early in processing humans can gauge which representational elements are relevant while remaining uncertain about how these elements are interrelated.\n\n3 Dynamic binding through synchrony\n\nGiven the demands placed on a representational system, a system that utilizes dynamic binding through synchrony would seem to be a good candidate mental architecture (though, as we will see, limitations arise when representing complex structures). A synchronous binding account of our mental architecture is consistent (at a general level) with behavioral findings, with the intuition that complex representations are distributed across time, and with evidence that neural temporal dynamics code information. Synchrony seems to offer the power to recombine a finite set of elements in a virtually unlimited number of ways (the defining characteristic of a discrete combinatorial system).\n\nWhile synchrony models seem appropriate for modeling certain behaviors, dynamic binding through synchrony does not seem to be an appropriate mechanism for establishing complex recursive bindings [2]. In a synchronous dynamic binding system, the distinction between a slot and a filler is lost, since bindings are not directional (i.e., it is not clear which unit is a predicate and which unit is an argument).
The slot and the filler simply share the same phase. In this sense, the mechanism is more akin to a grouping mechanism than to a binding mechanism. Grouping units together indicates that the units are part of the same representation, but it does not sort out the relations among the units as binding does.\n\nSynchrony runs into trouble when a unit has to act simultaneously as a slot and a filler. To represent embedded propositions with synchronous binding, a controller needs to be added. For instance, a structure with embedding, like A→B→C, could be represented with synchronous firings if A and B fired synchronously and then B and C fired synchronously. Still, synchronous binding blurs the distinction between a slot and a filler, necessitating that A, B, and C be marked as slots or fillers to unambiguously represent even the simple A→B→C structure. Notice that B must be marked as a slot when it fires synchronously with A, but must be marked as a filler when it fires synchronously with C. When representing embedded structures, the synchronous approach becomes complicated (i.e., simple connections are not sufficient to modulate firing patterns) and rigid (i.e., parallelism and flexibility are lost when a unit has to be either a slot or a filler). Ideally, units would be able to act simultaneously as slots and fillers, instead of alternating between these two structural roles.\n\n4 The asynchronous approach\n\nWhile synchrony models utilize some timing information, other valuable timing information is discarded as noise, making it difficult to represent multiple levels of structure. If A fired slightly before B, which fired slightly before C, asynchronous timing information (ordering information) would be available. This ordering information allows for directional binding relations and eliminates the need to label units as slots or fillers. Notice that B can act simultaneously as a slot and a filler. Directional bindings can unambiguously represent complex structures.\n\nPhase locking and wave-like patterns of firing need not occur during asynchronous binding. For instance, the firing pattern that encodes a structure like A→B→C does not need to be orderly (i.e., starting with A and ending with C). To encode A→B→C, unit B's firing schedule must observably speed up (on average) after unit A fires, while C's must speed up after B fires. For example, if we only considered the time window immediately after a unit fires, a firing sequence of B, C, no unit firing, A, and then B would provide evidence for the structure A→B→C. Of course, if A, B, and C fire periodically with stochastic schedules that are influenced by other units' firings, spurious binding evidence will accrue (e.g., occasionally, C will fire and A will fire in the next time step). Luckily, these accidents will be less frequent than events that support the intended bindings. As binding evidence is accumulated over time, binding errors will become less likely.
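To make the ordering logic concrete, the following minimal sketch (Python; the variable names are illustrative and not part of the published model) counts, for the firing sequence just described, how often one unit fires in the time step immediately after another. These successor events are the kind of raw evidence the mechanism accumulates:

from collections import Counter

# Firing sequence from the example above; None marks the step in which no unit fires.
sequence = ["B", "C", None, "A", "B"]

# Count events in which unit y fires in the time step immediately after unit x fired.
successor_counts = Counter(
    (x, y) for x, y in zip(sequence, sequence[1:]) if x is not None and y is not None
)

print(successor_counts)  # Counter({('B', 'C'): 1, ('A', 'B'): 1}) -- support for B->C and A->B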
Interestingly, the asynchronous mechanism can also represent structures through an inhibitory process that mirrors the excitatory process described above. A→B→C could be represented asynchronously if A was less likely to fire after B fired and B was less likely to fire after C fired. An inhibitory (negative) connection from B to A is in some ways equivalent to an excitatory (positive) connection from A to B.\n\n4.1 The mathematical expression of the model\n\nThe previous discussion of the asynchronous approach can be formalized. Below is a description of an asynchronous model that I have implemented.\n\n4.1.1 The anatomy of a unit\n\nIndividual units, when unaffected by other units, will fire periodically when active:\n\nif R_{t_i} ≥ 1, then O_{t_{i+1}} = 1, otherwise O_{t_{i+1}} = 0,   (1)\n\nwhere O_{t_{i+1}} is the unit's output at time i+1 and R_{t_i} is the unit's output refractory value, which is randomly set (after the unit fires) to a value drawn from the uniform distribution between 0 and 1 and is incremented at each time step by some constant (set to .1 in all simulations). Notice that a unit produces an output one time step after its output refractory value reaches threshold.\n\n4.1.2 A unit's behavior in the presence of other units\n\nA unit alters its output refractory value if it receives a signal (via a connection) from a unit that has just fired (i.e., a unit with a positive output). For example, if unit A fires (its output is 1) and there is a connection to unit B of strength +.3, then B's output refractory value will be incremented by +.3, enabling unit B to fire during the next time step, or at least decreasing the time until B fires. Conversely, negative (inhibitory) connections lower the refractory value, delaying firing.\n\nTwo unconnected units will tend to fire independently of each other, providing little evidence for a binding relation. Again, over a small time window, two units may fire contiguously by chance, but over many firings the evidence for a binding will approach zero.\n\n4.1.3 Interpreting firing patterns\n\nEvery time a unit fires, it creates evidence for binding hypotheses. The critical issue is how to collect and evaluate evidence for bindings. There are many possible evidence functions that interpret firing patterns in a sensible fashion. One simple function has the evidence for two units binding decrease linearly as the time between their firings increases. Evidence is updated every time step according to the following equation:\n\nif p ≥ (t_{u_j} - t_{u_i}) ≥ 1, then ΔE_{ij} = -(1/p)(t_{u_j} - t_{u_i}) + (1/p) + 1,   (2)\n\nwhere p is the size of the window for considering binding evidence (i.e., if p is 5, then units firing 5 time steps apart still generate binding evidence), t_{u_i} is the most recent time step at which unit u_i fired, and ΔE_{ij} is the change in the amount of evidence for u_i binding to u_j. Of course, some evidence will be spurious. The following decision rule can be used to determine whether two units share a binding relation:\n\nif (E_{ij} - E_{ji}) > k, then u_i binds to u_j,   (3)\n\nwhere k is some threshold greater than 0. This decision rule is formally equivalent to the diffusion model, which is a type of random walk model [6]. Equations 2 and 3 are very simple; other, more sophisticated methods can be used for collecting and evaluating binding evidence.
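As a concrete illustration of Equations 1-3, here is a minimal simulation sketch in Python with NumPy. All function and variable names are hypothetical, and applying the evidence update when the later unit of a pair fires (rather than on every intervening time step) is one reasonable reading of Equation 2, not a detail fixed by the text:

import numpy as np

def simulate(W, n_steps=2500, p=5, increment=0.1, rng=None):
    """Run the asynchronous binding dynamics.

    W[i, j] is the connection strength from unit i to unit j (zero diagonal assumed).
    Returns the evidence matrix E, where E[i, j] accumulates support for i -> j.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = W.shape[0]
    R = rng.uniform(0.0, 1.0, size=n)     # output refractory values (Section 4.1.1)
    last_fired = np.full(n, -np.inf)      # most recent firing time of each unit
    E = np.zeros((n, n))                  # accumulated binding evidence

    for t in range(n_steps):
        fired = R >= 1.0                  # Equation 1: a unit fires when its refractory value reaches 1
        for j in np.flatnonzero(fired):
            last_fired[j] = t
            R[j] = rng.uniform(0.0, 1.0)  # reset the refractory value after firing
            R += W[j]                     # a firing nudges connected units' refractory values (Section 4.1.2)
            # Equation 2: evidence for i -> j decreases linearly with the gap between firings
            for i in range(n):
                gap = t - last_fired[i]
                if i != j and 1 <= gap <= p:
                    E[i, j] += -(1.0 / p) * gap + (1.0 / p) + 1.0
        R += increment                    # the refractory value grows by a constant each step
    return E

def decide_bindings(E, k=1.0):
    """Equation 3: unit i binds to unit j when E[i, j] - E[j, i] exceeds the threshold k."""
    n = E.shape[0]
    return [(i, j) for i in range(n) for j in range(n) if i != j and E[i, j] - E[j, i] > k]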
4.2 Performance of the Asynchronous Mechanism\n\nIn this section, the asynchronous binding mechanism's performance characteristics are examined. In particular, the model's ability to represent tree structures of varying complexity was explored. Tree structures can be used to represent complex relational information, like the parse of a sentence. An advantage of using tree structures to measure performance is that the complexity of a tree can be easily described by two factors: trees can vary in their depth and in their branching. In the simulations reported here, trees had a branching factor and depth of either 1, 2, or 3. These two factors were crossed, yielding 9 different tree structures. This design makes it possible to assess how the model processes structures of varying complexity. One sensible prediction (given our intuitions about how we process structured representations) is that trees with greater depth and branching will take longer to represent.\n\n[Figure 1: Performance curves for the 9 different structures are shown. Left panels: organized by branching; right panels: organized by depth; x-axis: processing time.]\n\nIn the simulations reported here, both positive and negative connections were used simultaneously. For instance, in a tree structure, if A was intended to bind to B, A's connection to B was set to +.1 and B's connection to A was set to -.1. The combination of both connection types yields the best performance.\n\nIn these simulations, both excitatory and inhibitory binding connection values were set relatively low (all binding connections were of size .1), providing a strict test of the model's sensitivity. The low connection values prevented bound units from establishing tight couplings (characteristic of bound units in synchrony models). For example, with an excitatory connection from A to B of .1, A's firing does not ensure that B will fire in the next time step (or the next few time steps, for that matter). The lack of a tight coupling requires the model to be more sensitive to how one unit affects another unit's firing schedule. With all connections of size .1, firing patterns representing complex structures will appear chaotic and unorderly.
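The connection pattern just described can be written down directly. The sketch below (Python; the function name and the depth convention, counting levels of children below a single root, are assumptions made for illustration) builds the connection matrix for one of the nine tree structures, with a +.1 connection from each parent to each of its children and a -.1 connection back:

import numpy as np

def tree_connections(branching, depth, strength=0.1):
    """Connection matrix for a tree with the given branching factor and depth.

    For every intended binding parent -> child, the parent-to-child connection is
    +strength and the child-to-parent connection is -strength, as in the simulations.
    Returns (W, edges), where edges lists the intended (parent, child) bindings.
    """
    edges = []
    frontier = [0]              # unit 0 is the root
    next_id = 1
    for _ in range(depth):
        new_frontier = []
        for parent in frontier:
            for _ in range(branching):
                child = next_id
                next_id += 1
                edges.append((parent, child))
                new_frontier.append(child)
        frontier = new_frontier
    W = np.zeros((next_id, next_id))
    for parent, child in edges:
        W[parent, child] = +strength   # excitatory: the parent's firing speeds up the child
        W[child, parent] = -strength   # inhibitory: the child's firing slows down the parent
    return W, edges

# Example: branching 2, depth 2 gives 7 units and 6 intended bindings.
W, edges = tree_connections(branching=2, depth=2)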
In all simulations, the time window for considering binding evidence was 5 time steps (i.e., Equation 2 was used with p set to 5).\n\nPerformance was measured by calculating the percentage of bindings that were correct. The bindings the model settled upon were determined by first counting the number of bindings in the intended structure. The model then created a structure with this number of bindings (this is equivalent to treating k as a free parameter), choosing the bindings it believed to be most likely (based on accrued evidence). The model was correct when the bindings it believed to be present corresponded to the intended bindings.\n\nFor each of the 9 structures (3 levels of depth crossed with 3 levels of branching), hundreds of trials were run (the mechanism is stochastic) until the performance curves became smooth. The model's performance was measured every 25th time step, up to the 2500th time step. Performance (averaged across trials) for all structures is shown in Figure 1. Any visible difference between performance curves is statistically significant. As predicted, there was a main effect of both branching and depth. The left panels of Figure 1 organize the data by branching factor, revealing a systematic effect of depth. The right panels are organized by depth and reveal a systematic effect of branching. As structures become more complex, they appear to take longer to represent.
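Read as stated, this scoring procedure can be sketched as follows (Python, continuing the hypothetical helpers above; the function name is an assumption, and ties among equal evidence differences are broken arbitrarily by the sort):

import numpy as np

def percent_correct(E, intended_edges):
    """Select as many directed bindings as the intended structure contains, taking the
    pairs with the largest evidence differences (treating k as a free parameter), and
    report the percentage that match the intended bindings."""
    n = E.shape[0]
    diffs = E - E.T                                    # directional evidence differences
    candidates = sorted(
        ((diffs[i, j], (i, j)) for i in range(n) for j in range(n) if i != j),
        reverse=True,
    )
    chosen = {pair for _, pair in candidates[: len(intended_edges)]}
    return 100.0 * len(chosen & set(intended_edges)) / len(intended_edges)

# Example usage with the sketches above:
# E = simulate(W, n_steps=2500, p=5)
# print(percent_correct(E, edges))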
5 Conclusions\n\nThe ability to effectively represent and manipulate complex knowledge structures is central to human cognition [3]. Connectionist models generally lack this ability, making it difficult to give a connectionist account of our mental architecture. The asynchronous mechanism provides a connectionist framework for representing structures in a way that is biologically, computationally, and behaviorally feasible. The mechanism establishes bindings over time using simple neuron-like computing elements. The asynchronous approach treats bindings as directional and does not blur the distinction between a slot and a filler, as the synchronous approach does.\n\nThe asynchronous mechanism builds representations that can be differentiated from each other, capturing important differences between representational states. The representations that the asynchronous mechanism builds can also be easily compared, and commonalities between disparate states can be extracted by analogical processes, allowing for generalization and feature discovery. In fact, an analogical (i.e., graph) matcher has been built using the asynchronous mechanism [7]. Variants of the model need to be explored; this paper only outlines the essentials of the architecture. Synchronous dynamic binding models were partly inspired by work in neuroscience. Hopefully the asynchronous dynamic binding model will now inspire neuroscience researchers. Some evidence for rate-based (spatially based) neural codes has been revisited and viewed as consistent with more complex temporal codes [1]; perhaps evidence for synchrony can likewise be subjected to more sophisticated analyses and be better construed as evidence for the asynchronous mechanism.\n\nAcknowledgments\n\nThis work was supported by the Office of Naval Research under the National Defense Science and Engineering Graduate Fellowship Program. I would like to thank John Hummel for his helpful comments.\n\nReferences\n\n[1] M. Abeles, H. Bergman, E. Margalit, and E. Vaadia. Spatiotemporal firing patterns in the frontal cortex of behaving monkeys. Journal of Neurophysiology, 70:1629-1638, 1993.\n\n[2] E. Bienenstock. Composition. In A. Aertsen and V. Braitenberg, editors, Brain Theory: Biological Basis and Computational Principles. Elsevier, New York, 1996.\n\n[3] D. Gentner and A. B. Markman. Analogy - watershed or Waterloo? Structural alignment and the development of connectionist models of analogy. In S. J. Hanson, J. D. Cowan, and C. L. Giles, editors, Advances in Neural Information Processing Systems 5, pages 855-862. Morgan Kaufmann Publishers, San Mateo, CA, 1993.\n\n[4] C. M. Gray and W. Singer. Stimulus-specific neuronal oscillations in orientation columns of cat visual cortex. Proceedings of the National Academy of Sciences, USA, 86:1698-1702, 1989.\n\n[5] J. E. Hummel and I. Biederman. Dynamic binding in a neural network for shape recognition. Psychological Review, 99:480-517, 1992.\n\n[6] D. R. J. Laming. Information Theory of Choice Reaction Time. Oxford University Press, New York, 1968.\n\n[7] B. C. Love. Asynchronous connectionist binding. Under review, 1998.\n\n[8] Y. Miyashita and H. S. Chang. Neuronal correlate of pictorial short-term memory in primate temporal cortex. Nature, 331:68-70, 1988.\n\n[9] J. Pollack. Recursive distributed representations. Artificial Intelligence, 46:77-105, 1990.\n\n[10] W. Rall. Dendritic locations of synapses and possible mechanisms for the monosynaptic EPSP in motoneurons. Journal of Neurophysiology, 30:1169-1193, 1967.\n\n[11] R. Ratcliff and G. McKoon. Speed and accuracy in the processing of false statements about semantic information. Journal of Experimental Psychology: Learning, Memory, & Cognition, 8:16-36, 1989.\n\n[12] L. Shastri and V. Ajjanagadde. From simple associations to systematic reasoning: A connectionist representation of rules, variables, and dynamic binding using temporal synchrony. Behavioral and Brain Sciences, 16:417-494, 1993.\n\n[13] W. Softky. Fine analog coding minimizes information transmission. Neural Networks, 9:15-24, 1996.\n\n[14] A. C. Tang and T. J. Sejnowski. An ecological approach to the neural code. In Proceedings of the Nineteenth Annual Conference of the Cognitive Science Society, page 852, Mahwah, NJ, 1996. Erlbaum.\n\n[15] A. Treisman and H. Schmidt. Illusory conjunctions in the perception of objects. Cognitive Psychology, 14:107-141, 1982.\n\n[16] E. Vaadia, I. Haalman, M. Abeles, and H. Bergman. Dynamics of neuronal interactions in monkey cortex in relation to behavioural events. Nature, 373:515-518, 1995.
\n\n[17] C. von der Malsburg. The correlation theory of brain function. Technical Report 81-2, Max Planck Institute for Biophysical Chemistry, Göttingen, Germany, 1981.\n", "award": [], "sourceid": 1628, "authors": [{"given_name": "Bradley", "family_name": "Love", "institution": null}]}