{"title": "Reorganisation of Somatosensory Cortex after Tactile Training", "book": "Advances in Neural Information Processing Systems", "page_first": 82, "page_last": 88, "abstract": "", "full_text": "Reorganisation of Somatosensory Cortex after \n\nTactile Training \n\nRasmus S. Petersen \n\nJohn G. Taylor \n\nCentre for Neural Networks, King's College London \n\nStrand, London WC2R 2LS, UK \n\nAbstract \n\nTopographic maps in primary areas of mammalian cerebral cortex reor(cid:173)\nganise as a result of behavioural training. The nature of this reorgani(cid:173)\nsation seems consistent with the behaviour of competitive neural net(cid:173)\nworks, as has been demonstrated in the past by computer simulation. \nWe model tactile training on the hand representation in primate somato(cid:173)\nsensory cortex, using the Neural Field Theory of Amari and his col(cid:173)\nleagues. Expressions for changes in both receptive field size and mag(cid:173)\nnification factor are derived, which are consistent with owl monkey ex(cid:173)\nperiments and make a prediction which goes beyond them. \n\n1. INTRODUCTION \nThe primary cortical areas of mammals are now known to be plastic throughout life; re(cid:173)\nviewed recently by Kaas(1995). The problem of how and why the underlying learning \nprocesses work is an exciting one, for which neural network modelling appears well \nsuited. In this contribution, we model the long-term effects of tactile training (Jenkins et \nai, 1990) on the functional organisation of monkey primary somatosensory cortex, by \nperturbing a topographic net (Takeuchi and Amari, 1979). \n\n1.1 ADAPTATION IN ADULT SOMATOSENSORY CORTEX \n\nLight touch activates skin receptors which in primates are mapped, largely topographi(cid:173)\ncally, in area 3b. In a series of papers, Merzenich and colleagues describe how area 3b \nbecomes reorganised following peripheral nerve damage (Merzenich et ai, 1983a; 1983b) \nor digit amputation (Merzenich et ai, 1984). 
The underlying learning processes may also explain the phenomenon of phantom limb \"telescoping\" (Haber, 1955). Recent advances in brain scanning are beginning to make them observable even in the human brain (Mogilner et al., 1993). \n\n1.2 ADAPTATION ASSOCIATED WITH TACTILE TRAINING \n\nJenkins et al. trained owl monkeys to maintain contact with a rotating disk. The apparatus was arranged so that success eventually involved touching the disk with only the digit tips. Hence these regions received selective stimulation. Some time after training had been completed, electrophysiological recordings were made from area 3b. These revealed an increase in Magnification Factor (MF) for the stimulated skin and a decrease in the size of Receptive Fields (RFs) for that region. The net territory gained for light touch of the digit tips came from area 3a and/or the face region of area 3b, but details of any changes in these representations were not reported. \n\n2. THEORETICAL FRAMEWORK \n\n2.1 PREVIOUS WORK \n\nTakeuchi and Amari (1979), Ritter and Schulten (1986), Pearson et al. (1987) and Grajski and Merzenich (1990) have all modelled amputation/denervation by computer simulation of competitive neural networks with various Hebbian weight dynamics. Grajski and Merzenich (1990) also modelled the data of Jenkins et al. We build on this research within the Neural Field Theory framework (Amari, 1977; Takeuchi and Amari, 1979; Amari, 1980) of the Neural Activity Model of Willshaw and von der Malsburg (1976). \n\n2.2 NEURAL ACTIVITY MODEL \n\nConsider a \"cortical\" network of simple, laterally connected neurons. Neurons sum inputs linearly and output a sigmoidal function of this sum. The lateral connections are excitatory at short distances and inhibitory at longer ones. 
Such a network is competitive: the steady state consists of blobs of activity centred around those neurons locally receiving the greatest afferent input (Amari, 1977). The range of the competition is limited by the range of the lateral inhibition. \n\nSuppose now that the afferent synapses adapt in a Hebbian manner to stimuli that are localised in the sensory array; the lateral connections are fixed. Willshaw and von der Malsburg (1976) showed by computer simulation that this network is able to form a topographic map of the sensory array. Takeuchi and Amari (1979) amended the Willshaw-Malsburg model slightly: neurons possess an adaptive firing threshold in order to prevent synaptic weight explosion, rather than the more usual mechanism of weight normalisation. They proved that a topographic mapping is stable under certain conditions. \n\n2.3 TAKEUCHI-AMARI THEORY \n\nConsider a one-dimensional model. The membrane dynamics are: \n\ndu(x,y,t)/dt = -u(x,y,t) + \u222b s(x,y',t) a(y-y') dy' - s_0(x,t) a_0 + \u222b w(x-x') f[u(x',y,t)] dx' - h   (1) \n\nHere u(x,y,t) is the membrane potential at time t for point x when a stimulus centred at y is being presented; h is a positive resting potential; w(z) is the lateral weight between two points in the neural field separated by a distance z - positive for small |z| and negative for larger |z|; s(x,y,t) is the excitatory synaptic weight from y to x at time t and s_0(x,t) is an inhibitory weight from a tonically active inhibitory input a_0 to x at time t - it is the adaptive firing threshold. f[u] is a binary threshold function that maps positive membrane potentials to 1 and non-positive ones to 0. \n\nIdealised, point-like stimuli are assumed, which \"spread out\" somewhat on the sensory surface or subcortically. The spreading process is assumed to be independent of y and is described in the same coordinates. 
It is represented by the function a(y-y'), which describes the effect of a point input at y spreading to the point y'. This is a decreasing, positive, symmetric function of |y-y'|. With this type of input, the steady-state activity of the network is a single blob, localised around the neuron with maximum afferent input. \n\nThe afferent synaptic weights adapt in a leaky Hebbian manner but with a time constant much larger than that of the membrane dynamics (1). Effectively this means that learning occurs on the steady state of the membrane dynamics. The following averaged weight dynamics can be justified (Takeuchi and Amari, 1979; Geman, 1979): \n\nds(x,y,t)/dt = -s(x,y,t) + b \u222b p(y') a(y-y') f[\u016b(x,y')] dy' \nds_0(x,t)/dt = -s_0(x,t) + b' a_0 \u222b p(y') f[\u016b(x,y')] dy'   (2) \n\nwhere \u016b(x,y') is the steady state of the membrane dynamics at x given a stimulus at y', and p(y') is the probability of a stimulus at y'; b, b' are constants. \n\nEmpirically, the \"classical\" Receptive Field (RF) of a neuron is defined as the region of the input field within which localised stimulation causes a change in its activity. This concept can be modelled in neural field theory as: the RF of a neuron at x is the portion of the input field within which a stimulus evokes a positive membrane potential (inhibitory RFs are not considered). If the neural field is a continuous map of the sensory surface then the RF of a neuron is fully described by its two borders r_1(x), r_2(x), defined formally by: \n\n\u016b(x, r_i(x)) = 0,   i = 1,2   (3) \n\nwhich are illustrated in figure 1. \n\nLet RF size and RF position be denoted respectively by the functions r(x) and m(x), which represent experimentally measurable quantities. 
In terms of the border functions they can be expressed: \n\nr(x) = r_2(x) - r_1(x) \nm(x) = (1/2)(r_1(x) + r_2(x))   (4) \n\nFigure 1: RF boundaries as a function of position in the neural field, for a topographically ordered network. Only the region in between r_1(x) and r_2(x) has positive steady-state membrane potential \u016b(x,y). r_1(x) and r_2(x) are defined by the condition \u016b(x, r_i(x)) = 0 for i = 1,2. \n\nUsing (1), (2) and the definition (3), Takeuchi and Amari (1979) derived dynamical equations for the change in RF borders due to learning. In the case of uniform stimulus probability, they found solutions for the steady-state RF border functions. With periodic boundary conditions, the basic solution is a linear map with constant RF size: \n\nr_1^uni(x) = \u03c1x \nr_2^uni(x) = \u03c1x + r_0 \nr(x) = r_0 = const \nm(x) = \u03c1x + (1/2) r_0   (5) \n\nThis means that both RF size and activity blob size are uniform across the network and that RF position m(x) is a linear function of network location. (The value of \u03c1 is determined by boundary conditions; r_0 is then determined from the joint equilibrium of (1), (2).) The inverse of the RF position function, denoted by m^-1(y), is the centre of the cortical active region caused by a stimulus centred at y. The change in m^-1(y) over a unit interval in the input field is, by empirical definition, the cortical magnification factor (MF). Here we model MF as the rate of change of m^-1(y). The MF for the system described by (5) is: \n\nd m^-1(y)/dy = \u03c1^-1   (6) \n\n3. 
ANALYSIS OF TACTILE TRAINING \n\n3.1 TRAINING MODEL AND ASSUMPTIONS \n\nJenkins et al.'s training sessions caused an increase in the relative frequency of stimulation to the finger tips, and hence a decrease in the relative frequency of stimulation elsewhere. Over a long time, we can express this fact as a localised change in stimulus probability (figure 2). (This is not sufficient to cause cortical reorganisation - Recanzone et al. (1992) showed that attention to the stimulation is vital. We consider only attended stimulation in this model.) To account for such data it is clearly necessary to analyse non-uniform stimulus probabilities, which demands extending the results of Takeuchi and Amari. Unfortunately, it seems to be hard to obtain general results. However, a perturbation analysis around the uniform probability solution (5) is possible. \n\nTo proceed in this way, we must be able to assume that the change in the stimulus probability density function away from uniformity is small. This reasoning is expressed by the following equation: \n\np(y) = p_0 + \u03b5 p\u0303(y)   (7) \n\nwhere p(y) is the new stimulus probability, expressed in terms of the uniform one p_0 and a perturbation p\u0303(y) due to training; \u03b5 is a small constant. The effect of the perturbation is to ease the weight dynamics (2) away from the solution (5) to a new steady state. Our goal is to discover the effect of this on the RF border functions, and hence on RF size and MF. \n\nFigure 2: The type of change in stimulus probability density that we assume to model the effects of behavioural training. \n\n3.2 PERTURBATION ANALYSIS \n\n3.2.1 General Case \n\nFor a small enough perturbation, the effect on the RF borders and on the activity blob size ought also to be small. 
We consider effects to first order in \u03b5, seeking new solutions of the form: \n\nr_i^per(x) = r_i^uni(x) + \u03b5 r\u0303_i(x),   i = 1,2 \nr\u0303(x) = r\u0303_2(x) - r\u0303_1(x) \nm\u0303(x) = (1/2)(r\u0303_1(x) + r\u0303_2(x))   (8) \n\nwhere the superscript per denotes the new, perturbed equilibrium and uni denotes the unperturbed, uniform probability equilibrium. Using (1) and (2) in (3) for the post-training RF borders, and expanding to first order in \u03b5, a pair of difference equations may be obtained for the changes in RF borders. It is convenient to define the following terms: \n\nA_1(x) = \u222b_0^{r_0} p\u0303(y + \u03c1x) k(y) dy - b' a_0^2 \u222b_{r_1^uni(x)}^{r_2^uni(x)} p\u0303(y) dy \nA_2(x) = \u222b_0^{r_0} p\u0303(y + \u03c1x + r_0) k(y) dy - b' a_0^2 \u222b_{r_1^uni(x)}^{r_2^uni(x)} p\u0303(y) dy \nk(y) = b \u222b a(y - y') a(y') dy' \nB = b' a_0^2 \u03c1 - k(r_0) p_0 > 0 \nC = w(\u03c1^-1 r_0) \u03c1^-1 < 0   (9) \n\nwhere the signs of B and C arise from stability conditions (Amari, 1977; Takeuchi and Amari, 1979). In terms of RF size and RF position (4), the general result is: \n\nB \u0394^2 r\u0303(x) = (1/2)(\u0394 + 1) A_1(x) - \u0394 A_2(x) \nBC \u0394^2 m\u0303(x) = (B - C - (1/2) C \u0394)(\u0394 + 1) A_1(x) + (C - B + (1/2)(C - 2B) \u0394) A_2(x)   (10) \n\nwhere \u0394 is the difference operator: \n\n\u0394 f(x) = f(x + \u03c1^-1 r_0) - f(x)   (11) \n\n3.2.2 Particular Case \n\nThe second order difference equations (10) are rather opaque. This is partly due to coupling in y caused by the auto-correlation function k(y): (10) simplifies considerably if very narrow stimuli are assumed - a(y) = \u03b4(y) (see also Amari, 1980). For periodic boundary conditions: \n\n(12) \n\nwhere: \n\nm^{-1,per}(y) = m^{-1,uni}(y) + \u03b5 m\u0303^-1(y) = \u03c1^-1 (y - (1/2) r_0) + \u03b5 m\u0303^-1(y)   (13) \n\nand we have used the crude approximation: \n\nd m\u0303(x)/dx \u2248 (\u03c1/r_0) \u0394 m\u0303(x - (1/2) \u03c1^-1 r_0)   (14) \n\nwhich demands smoothness on the scale of r_0. However, for perturbations like that sketched in figure 2, this is sufficient to tell us about the constant regions of MF. 
(We would not expect to be able to model the data in the transition region in any case, as its form is too dependent upon fine detail of the model.) \n\nOur results (12) show that the change in RF size of a neuron is simply minus the total change in stimulus probability over its RF. Hence RF size decreases where p\u0303(y) increases and vice versa. Conversely, the change in MF at a given stimulus location is roughly the local average change in stimulus probability there. Note that changes in RF size correlate inversely with changes in MF. Figure 3 is a sketch of these results for the perturbation of figure 2. \n\nFigure 3: Results of the perturbation analysis for how behavioural training (figure 2) changes RF size and MF respectively, in the case where stimulus width can be neglected. For MF - due to the approximation (14) - the predictions do not apply near the transitions. \n\n4. DISCUSSION \n\nEquations (12) are the results of our model for RF size and MF after area 3b has fully adapted to the behavioural task, in the case where stimulus width can be neglected. They appear to be fully consistent with the data of Jenkins et al. described above: RF size decreases in the region of cortex selective for the stimulated body part and the MF for this body part increases. Our analysis also makes a specific prediction that goes beyond Jenkins et al.'s data, directly due to the inverse relationship between changes in RF size and those in MF. Within the regions that surrender territory to the entrained finger tips (sometimes the face region), for which MF decreases, RF sizes should increase. \n\nSurprisingly perhaps, these changes in RF size are not due to adaptation of the afferent weights s(x,y). The changes are rather due to the adaptive threshold term s_0(x). This point will be discussed more fully elsewhere. 
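The qualitative behaviour derived above - magnification increasing and RF size shrinking where stimulation probability is raised, with the opposite happening elsewhere - can be illustrated numerically. The sketch below uses a Kohonen-style self-organising map as a crude stand-in for the competitive field of (1)-(2); the parameter values, the boosted stimulus density on [0.4, 0.6], and the RF-size proxy (spacing between neighbouring RF centres) are illustrative assumptions, not taken from the analysis itself.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100                      # "cortical" units along a line
w = np.linspace(0, 1, N)     # afferent RF centres, initially topographic

def sample_stimulus():
    # Perturbed stimulus density: extra probability mass on [0.4, 0.6],
    # mimicking selective stimulation of the trained skin region.
    if rng.random() < 0.5:
        return rng.uniform(0.4, 0.6)
    return rng.uniform(0.0, 1.0)

T = 20000
for t in range(T):
    y = sample_stimulus()
    win = int(np.argmin(np.abs(w - y)))      # competition: best-matching unit
    sigma = 8.0 * (1 - t / T) + 0.5          # shrinking lateral neighbourhood
    lr = 0.5 * (1 - t / T) + 0.01            # decaying learning rate
    h = np.exp(-0.5 * ((np.arange(N) - win) / sigma) ** 2)
    w += lr * h * (y - w)                    # Hebbian-like pull toward stimulus

w.sort()
in_region = (w > 0.4) & (w < 0.6)
mf_proxy = in_region.mean() / 0.2            # units per unit skin; uniform map = 1
spacing = np.diff(w)                         # neighbour spacing ~ inverse density,
rf_in = spacing[in_region[:-1]].mean()       # a crude proxy for RF size
rf_out = spacing[~in_region[:-1]].mean()
print(mf_proxy, rf_in, rf_out)
```

With the extra stimulus mass on [0.4, 0.6], the trained region ends up with more than its uniform share of units (MF proxy above 1) and smaller spacing between neighbouring RF centres than the surround, mirroring the inverse relationship between MF and RF-size changes derived above.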
\n\nA limitation of our analysis is the assumption that the change in stimulus probability is in some sense small. Such an approximation may be reasonable for behavioural training but seems less so as regards important experimental protocols like amputation or denervation. Evidently a more general analysis would be highly desirable. \n\n5. CONCLUSION \n\nWe have analysed a system with three interacting features: lateral inhibitory interactions; Hebbian adaptivity of afferent synapses; and an adaptive firing threshold. Our results indicate that such a system can account for the data of Jenkins et al. concerning the response of adult somatosensory cortex to the changing environmental demands imposed by tactile training. The analysis also brings out a prediction of the model that may be testable. \n\nAcknowledgements \n\nRSP is very grateful for a travel stipend from the NIPS Foundation and for a Nick Hughes bursary from the School of Physical Sciences and Engineering, King's College London, that enabled him to participate in the conference. \n\nReferences \n\nAmari S. (1977) Biol. Cybern. 27 77-87 \nAmari S. (1980) Bull. Math. Biology 42 339-364 \nGeman S. (1979) SIAM J. App. Math. 36 86-105 \nGrajski K.A., Merzenich M.M. (1990) in Neural Information Processing Systems 2, Touretzky D.S. (Ed) 52-59 \nHaber W.B. (1955) J. Psychol. 40 115-123 \nJenkins W.M., Merzenich M.M., Ochs M.T., Allard T., Guic-Robles E. (1990) J. Neurophysiol. 63 82-104 \nKaas J.H. (1995) in The Cognitive Neurosciences, Gazzaniga M.S. (Ed) 51-71 \nMerzenich M.M., Kaas J.H., Wall J.T., Nelson R.J., Sur M., Felleman D.J. (1983a) Neuroscience 8 35-55 \nMerzenich M.M., Kaas J.H., Wall J.T., Sur M., Nelson R.J., Felleman D.J. (1983b) Neuroscience 10 639-665 \nMerzenich M.M., Nelson R.J., Stryker M.P., Cynader M.S., Schoppmann A., Zook J.M. (1984) J. Comp. Neurol. 
224 591-605 \nMogilner A., Grossman A.T., Ribary U., Joliot M., Volkmann J., Rapaport D., Beasley R., Llinas R. (1993) Proc. Natl. Acad. Sci. USA 90 3593-3597 \nPearson J.C., Finkel L.H., Edelman G.M. (1987) J. Neurosci. 7 4209-4223 \nRecanzone G.H., Merzenich M.M., Jenkins W.M., Grajski K.A., Dinse H.R. (1992) J. Neurophysiol. 67 1031-1056 \nRitter H., Schulten K. (1986) Biol. Cybern. 54 99-106 \nTakeuchi A., Amari S. (1979) Biol. Cybern. 35 63-72 \nWillshaw D.J., von der Malsburg C. (1976) Proc. R. Soc. Lond. B 194 203-243 \n", "award": [], "sourceid": 1053, "authors": [{"given_name": "Rasmus", "family_name": "Petersen", "institution": null}, {"given_name": "John", "family_name": "Taylor", "institution": null}]}