Neural Information Processing Systems (NIPS 1987)
Paolo Gaudiano
Patterns of activity over real neural structures are known to exhibit time-dependent behavior. It would seem that the brain may be capable of exploiting the temporal behavior of activity in neural networks to perform functions which cannot otherwise be easily implemented, such as the generation of sequential behavior and the recognition of time-dependent stimuli. A model is presented here which uses neuronal populations with recurrent feedback connections in an attempt to observe and describe the resulting time-dependent behavior. Shortcomings and problems inherent to this model are discussed. Current models by other researchers are reviewed, and their similarities and differences are discussed.
METHODS / PRELIMINARY RESULTS
In previous papers [2,3], computer models were presented that simulate a net consisting of two spatially organized populations of realistic neurons. The populations are richly interconnected and are shown to exhibit internally sustained activity. It was shown that if the neurons have response times significantly shorter than the typical unit time characteristic of the input patterns (usually 1 msec), the populations will exhibit time-dependent behavior, typically in the form of the net falling into a limit cycle. By a limit cycle it is meant that the population falls into an activity pattern in which all of the active cells fire in a cyclic, periodic fashion. Although the periods of firing of the individual cells may differ, after a fixed time the overall population activity repeats. For populations organized in 7x7 grids, the limit cycle will usually start 20-200 msec after the input is turned off, and its period will be on the order of 20-100 msec.
© American Institute of Physics 1988

The point of interest is that if the net is allowed to undergo synaptic modification by means of a modified Hebbian learning rule while being presented with a specific spatial pattern (i.e., cells at specific spatial locations within the net are externally stimulated), subsequent presentations of the same pattern with different temporal characteristics will cause the population to recall patterns which are spatially identical (the same cells will be active) but which have different temporal qualities. In other words, the net can fall into a different limit cycle. These limit cycles seem to behave as attractors, in that similar input patterns will result in the same limit cycle; hence each distinct limit cycle appears to have a basin of attraction. Thus a net which can only learn a small
number of spatially distinct patterns can recall those patterns in a number of temporal modes. If it were possible to discriminate quantitatively between such temporal modes, it would seem reasonable to speculate that different limit cycles could correspond to different memory traces. This would significantly increase estimates of the memory storage capacity of the net.
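The spatial side of this behavior, namely Hebbian learning turning stimulated patterns into attractors with basins of attraction, can be sketched with a standard outer-product Hebbian rule (a stand-in for the modified Hebbian rule of [2,3], whose exact form is not reproduced here; the patterns and net size are illustrative, and this static sketch does not capture the temporal modes of the limit cycles).

```python
import numpy as np

# two orthogonal +-1 spatial patterns over an 8-cell net (illustrative)
p1 = np.array([1., 1., 1., 1., -1., -1., -1., -1.])
p2 = np.array([1., 1., -1., -1., 1., 1., -1., -1.])
N = len(p1)

# outer-product Hebbian storage: each pattern strengthens the synapses
# between its co-active cells
W = (np.outer(p1, p1) + np.outer(p2, p2)) / N
np.fill_diagonal(W, 0.0)

def recall(x, n_iter=10):
    """Iterate synchronous sign updates until the net settles."""
    for _ in range(n_iter):
        x = np.where(W @ x >= 0, 1.0, -1.0)
    return x

# a corrupted version of p1 still falls back into p1's basin of attraction
noisy = p1.copy()
noisy[0] = -1.0
recalled = recall(noisy)
```

In this sketch the attractors are fixed points; in the model described above they are limit cycles, so the same spatial pattern can be recalled in several temporal modes.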
It has also been shown that a net presented with a given pattern will fall into, and remain in, a limit cycle until another pattern is presented which causes the system to fall into a different basin of attraction. If no other patterns are presented, the net will remain in the same limit cycle indefinitely. Furthermore, the net will fall into the same limit cycle independently of the duration of the input stimulus, so long as the stimulus is presented long enough to raise the population activity beyond the minimum necessary to achieve self-sustained activity. Hence, if we suppose that the net "recognizes" an input when it falls into the corresponding limit cycle, it follows that the net will recognize a string of input patterns regardless of the duration of each pattern, so long as each is presented long enough for the net to fall into the appropriate limit cycle. In particular, our system is capable of falling into a limit cycle within some tens of milliseconds. This can be fast enough to encode, for example, a string of phonemes as would typically be found in continuous speech.

It may be possible, for instance, to create a model similar to Rumelhart and McClelland's 1981 model of word recognition by appropriately connecting multiple layers of these networks. If the response time of the cells were increased in higher layers, it may be possible to have the lowest level respond to stimuli quickly enough to distinguish phonemes (or some sub-phonemic basic linguistic unit), then have populations from this first level feed into a slower, word-recognizing population layer, and so on. Such a model may be able to perform word recognition from an input consisting of continuous phoneme strings even when the phonemes vary in duration of presentation.
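Treating recognition as "which limit cycle did the net fall into" presupposes a way of detecting that a trajectory has become periodic. One simple operationalization (an assumption of this sketch, not a procedure taken from [2,3]) is to scan backward from the final population state for its most recent exact recurrence:

```python
import numpy as np

def find_period(traj, max_period=None):
    """Smallest p > 0 such that the final population state also occurred
    p steps earlier.  traj is a (T, N) array of activity vectors; returns
    None if no recurrence is found (the net has not yet settled into a cycle).
    """
    T = len(traj)
    if max_period is None:
        max_period = T - 1
    for p in range(1, max_period + 1):
        if np.array_equal(traj[-1], traj[-1 - p]):
            return p
    return None

# synthetic trajectory: a brief transient, then the population activity
# repeats every 3 steps, mimicking a limit cycle
cycle = [np.array([1., 0., 0.]),
         np.array([0., 1., 0.]),
         np.array([0., 0., 1.])]
traj = np.array([np.array([1., 1., 1.])] + cycle * 4)
period = find_period(traj)
```

With real-valued activities, exact equality would be replaced by a tolerance, and distinguishing temporal modes of the same spatial pattern would additionally require comparing the sequence of states within one period, not just its length.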