{"title": "Programmable Analog Pulse-Firing Neural Networks", "book": "Advances in Neural Information Processing Systems", "page_first": 671, "page_last": 677, "abstract": null, "full_text": "PROGRAMMABLE ANALOG PULSE-FIRING \n\nNEURAL NETWORKS \n\n671 \n\nAlan F. Murray \nDept. of Elec. Eng., \nUniversity of Edinburgh, University of Edinburgh, \nMayfield Road, \nEdinburgh, EH9 3JL \nUnited Kingdom. \n\nEdinburgh, EH9 3JL \n\nUnited Kingdom. \n\nAlister Hamilton \n\nDept. of Elec. Eng., \n\nMayfield Road, \n\nLionel Tarassenko \nDept. of Eng. Science, \nUniversity of Oxford, \nParks Road, \nOxford, OX1 3PJ \nUnited Kingdom. \n\nABSTRACT \n\nWe describe pulse - stream firing integrated circuits that imple(cid:173)\nment asynchronous analog neural networks. Synaptic weights are \nstored dynamically, and weighting uses time-division of the \nneural pulses from a signalling neuron to a receiving neuron. \nMOS transistors in their \"ON\" state act as variable resistors to \ncontrol a capacitive discharge, and time-division is thus achieved \nby a small synapse circuit cell. The VLSI chip set design uses \n2.5J.1.m CMOS technology. \n\nINTRODUCTION \n\nNeural network implementations fall into two broad classes - digital [1,2] \nand analog (e.g. [3,4]). The strengths of a digital approach include the \nability to use well-proven design techniques, high noise immunity, and the \nability to implement programmable networks. However digital circuits are \nsynchronous, while biological neural networks are asynchronous. Further(cid:173)\nmore, digital multipliers occupy large areas of silicon. Analog networks \noffer asynchronous behaviour, smooth neural activation and (potentially) \nsmall circuit elements. On the debit side, however, noise immunity is low, \narbitrary high precision is not possible; and no reliable \"mainstream\" analog \nnonvolatile memory technology exists. \n\nMany analog VLSI implementations are nonprogrammable, and therefore \nhave fixed functionality. 
For instance, subthreshold MOS devices have been used to mimic the nonlinearities of neural behaviour, in implementing Hopfield-style nets [3], associative memory [5], visual processing functions [6], and auditory processing [7]. Electron-beam programmable resistive interconnects have been used to represent synaptic weights between more conventional operational-amplifier neurons [8,4]. \n\nWe describe programmable analog pulse-firing neural networks that use on-chip dynamic analog storage capacitors to store synaptic weights, currently refreshed from an external RAM via a Digital-Analog converter. \n\nPULSE-FIRING NEURAL NETWORKS \n\nA pulse-firing neuron i is a circuit which signals its state Vi by generating a stream of 0-5V pulses on its output. The pulse rate Ri varies from 0 when neuron i is OFF to Ri(max) when neuron i is fully ON. Switching between the OFF and ON states is a smooth transition in output pulse rate between these lower and upper limits. In a previous system, outlined below, the synapse allows a proportion of complete presynaptic neural pulses Vj to be added (electrically OR-ed) to its output. A synaptic \"gating\" function, determined by Tij, allowed bursts of complete pulses through the synapse. Moving down a column of synapses, therefore, we see an ever more crowded asynchronous mass of pulses, representing the aggregated activity of the receiving neuron. In the system that forms the substance of this paper, a proportion (determined by Tij) of each presynaptic pulse is passed to the postsynaptic summation. \n\n
Figure 1. Neuron Circuit (integrator, ring oscillator and pulse generator; excitatory and inhibitory pulse streams accumulate the activity xi) \n\nFigure 1 shows a CMOS implementation of the pulse-firing neuron function in a system where excitatory and inhibitory pulses are accumulated on separate channels. The output stage of the neuron consists of a \"ring oscillator\" - a feedback circuit containing an odd number of logic inversions, with the loop broken by a NAND gate, controlled by a smoothly varying voltage representing the neuron's total activity, \n\nxi = Σ_{j=0}^{n-1} Tij Vj \n\nThis activity is increased or decreased by the dumping or removal of charge packets from the \"integrator\" circuit. The arrival of an excitatory pulse dumps charge, while an inhibitory pulse removes it. Figure 2 shows a device level (SPICE) simulation of the neuron circuit. A strong excitatory input causes the neural potential to rise in steps and the neuron turns ON. Subsequent inhibitory pulses remove charge packets from the integrating capacitor at a higher rate, driving the neuron potential down and switching the neuron OFF. \n\nFigure 2. SPICE Simulation of Neuron (0-5V traces: excitatory input, inhibitory input, neural potential and neuron output) \n\nSYNAPSE CIRCUIT - USING CHOPPING CLOCKS \n\nIn an earlier implementation, \"chopping clocks\" were introduced - synchronous to one another, but asynchronous to the neural firing. One bit of the (digitally stored) weight Tij indicates its sign, and each other bit of precision is represented by a chopping clock. The clocks are non-overlapping; the MSB clock is high for 1/2 of the time, the next for 1/4 of the time, etc. These clocks are used to gate bursts of pulses such that a fraction Tij 
of the pulses are passed from the input of the synapse to either the excitatory or inhibitory output channel. \n\nCHOPPING CLOCK SYSTEM - PROBLEMS \n\nA custom VLSI synaptic array has been constructed [9] with the neural function realised in discrete SSI to allow flexibility in the choice of time constants. The technique has proven successful, but suffers from a number of problems:- \n\n- Digital gating (\"using chopping clocks\") is clumsy \n- Excitation and Inhibition on separate lines - bulky \n- Synapse complicated and of large area \n- < 100 synapses per chip \n- < 10 neurons per chip \n\nIn order to overcome these problems we have devised an alternative arithmetic technique that modulates individual pulse widths and uses analog dynamic weight storage. This results in a much smaller synapse. \n\nFigure 3. Pulse Multiplication (an input pulse of width W produces an output pulse of width W×Tij, which increments the activity) \n\nSYNAPSE CIRCUIT - PULSE MULTIPLICATION \n\nThe principle of operation of the new synapse is illustrated in Figure 3. Each presynaptic pulse of width W is modulated by the synaptic weight Tij such that the resulting postsynaptic pulse width is W.Tij. This is achieved by using an analog voltage to modulate a capacitive discharge as illustrated in Figure 4. The presynaptic pulse enters a CMOS inverter whose positive supply voltage (Vdd) is controlled by Tij. The capacitor is nominally charged to Vdd, but begins to discharge at a constant rate when the input pulse arrives. When the voltage on the capacitor falls below the threshold of the following inverter, the synapse output goes high. At the end of the presynaptic pulse the capacitor recharges rapidly and the synapse output goes low, having output a pulse of length W.Tij. 
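The discharge mechanism described above can be sketched behaviourally. The following Python fragment is a minimal sketch, not the authors' SPICE-level circuit: it assumes an idealised constant-rate discharge, with illustrative (hypothetical) values for the inverter threshold v_thresh and the discharge rate, and it drives Vdd directly rather than through the buffered weight voltage of the full synapse.

```python
# Behavioural sketch of the pulse-width multiplier synapse.
# v_thresh and discharge_rate are illustrative assumptions, not the
# authors' 2.5 um CMOS device parameters.

def output_pulse_width(w, v_dd, v_thresh=1.0, discharge_rate=1.0):
    '''Width of the postsynaptic pulse for a presynaptic pulse of width w.

    The capacitor starts at v_dd and discharges at a constant rate while
    the input pulse is high; the synapse output is high only once the
    capacitor voltage has fallen below the threshold of the following
    inverter, so the output lasts for the remainder of the input pulse.
    '''
    t_cross = max(0.0, (v_dd - v_thresh) / discharge_rate)  # time to reach threshold
    return max(0.0, w - t_cross)

if __name__ == '__main__':
    w = 4.0
    for v_dd in (1.0, 2.0, 3.0, 4.0, 5.0):
        multiplier = output_pulse_width(w, v_dd) / w
        print(v_dd, multiplier)  # multiplier changes linearly with v_dd
```

Under this idealised model the effective multiplier (output width divided by input width) is linear in the inverter supply voltage, consistent with the behaviour of the circuit described here.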
The circuit is now ready for the next presynaptic pulse. This mechanism gives a linear relationship between the multiplier and the inverter supply voltage, Vdd. \n\nFigure 4. Improved Synapse Circuit (Tik determines Vdd for the inverter; Vk is the presynaptic pulse input; Iref sets the discharge current) \n\nFULL SYNAPSE \n\nSynaptic weight storage is achieved using dynamic analog storage capacitors refreshed from off-chip RAM via a Digital-Analog converter. A CMOS active-resistor inverter is used as a buffer to isolate the storage capacitor from the multiplier circuit as shown in the circuit diagram of a full synapse in Figure 5. \n\nFigure 5. Full Synapse Circuit (storage capacitor holding the synaptic weight Tik, buffer, presynaptic state Vk and bias voltage inputs) \n\nA capacitor distributed over a column of synaptic outputs stores neural activity, xi, as an analog voltage. The range over which the synapse voltage - pulse length multiplier relationship is linear is shown in Figure 6. This wide (≈2V) range may be used to implement inhibition and excitation in a single synapse, by \"splitting\" the range such that the lower volt (1-2V) represents inhibition, and the upper volt (2-3V) excitation. Each presynaptic pulse removes a packet of charge from the activity capacitor while each postsynaptic pulse adds charge at twice the rate. In this way, a synaptic weight voltage of 2V, giving a pulse length multiplier of 1/2, gives no net change in neuron activity xi. The synaptic weight voltage range 1-2V therefore gives a net reduction in neuron activity and is used to represent inhibition; the range 2-3V gives a net increase in neuron activity and is used to represent excitation. \n\n
Figure 6. Multiplier Linearity (pulse length multiplier, 0 to 1.0, plotted against synapse voltage Tij, 0 to 5V) \n\nThe resulting synapse circuit implements excitation and inhibition in 11 transistors per synapse. It is estimated that this technique will yield more than 100 fully programmable neurons per chip. \n\nFURTHER WORK \n\nThere is still much work to be done to refine the circuit of Figure 5 to optimise (for instance) the mark-space ratio of the pulse firing and the effect of pulse overlap, and to minimise the power consumption. This will involve the creation of a custom pulse-stream simulator, implemented directly as code, to allow these parameters to be studied in detail in a way that probing an actual chip does not allow. Finally, as Hebbian (and modified Hebbian - for instance [10]) learning schemes only require a synapse to \"know\" the presynaptic and postsynaptic states, we are able to implement learning on-chip at little cost, as the chip topology makes both of these signals available to the synapse locally. This work introduces as many exciting possibilities for truly autonomous systems as it does potential problems! \n\nAcknowledgements \n\nThe authors acknowledge the support of the Science and Engineering Research Council (UK) in the execution of this work. \n\nReferences \n\n1. A. F. Murray, A. V. W. Smith, and Z. F. Butler, \"Bit-Serial Neural Networks,\" Neural Information Processing Systems (Proc. 1987 NIPS Conference), p. 573, 1987. \n\n2. S. C. J. Garth, \"A Chipset for High Speed Simulation of Neural Network Systems,\" IEEE Conference on Neural Networks, San Diego, vol. 3, pp. 443-452, 1987. \n\n3. M. A. Sivilotti, M. R. Emerling, and C. A. Mead, \"VLSI Architectures for Implementation of Neural Networks,\" Proc. AIP Conference on Neural Networks for Computing, Snowbird, pp. 408-413, 1986. \n\n4. H. P. Graf, L. D. 
Jackel, R. E. Howard, B. Straughn, J. S. Denker, W. Hubbard, D. M. Tennant, and D. Schwartz, \"VLSI Implementation of a Neural Network Memory with Several Hundreds of Neurons,\" Proc. AIP Conference on Neural Networks for Computing, Snowbird, pp. 182-187, 1986. \n\n5. M. Sivilotti, M. R. Emerling, and C. A. Mead, \"A Novel Associative Memory Implemented Using Collective Computation,\" Chapel Hill Conf. on VLSI, pp. 329-342, 1985. \n\n6. M. A. Sivilotti, M. A. Mahowald, and C. A. Mead, \"Real-Time Visual Computations Using Analog CMOS Processing Arrays,\" Stanford VLSI Conference, pp. 295-312, 1987. \n\n7. C. A. Mead, in Analog VLSI and Neural Systems, Addison-Wesley, 1988. \n\n8. W. Hubbard, D. Schwartz, J. S. Denker, H. P. Graf, R. E. Howard, L. D. Jackel, B. Straughn, and D. M. Tennant, \"Electronic Neural Networks,\" Proc. AIP Conference on Neural Networks for Computing, Snowbird, pp. 227-234, 1986. \n\n9. A. F. Murray, A. V. W. Smith, and L. Tarassenko, \"Fully-Programmable Analogue VLSI Devices for the Implementation of Neural Networks,\" Int. Workshop on VLSI for Artificial Intelligence, 1988. \n\n10. S. Grossberg, \"Some Physiological and Biochemical Consequences of Psychological Postulates,\" Proc. Natl. Acad. Sci. USA, vol. 60, pp. 758-765, 1968. \n", "award": [], "sourceid": 187, "authors": [{"given_name": "Alister", "family_name": "Hamilton", "institution": null}, {"given_name": "Alan", "family_name": "Murray", "institution": null}, {"given_name": "Lionel", "family_name": "Tarassenko", "institution": null}]}