{"title": "Modelling motion primitives and their timing in biologically executed movements", "book": "Advances in Neural Information Processing Systems", "page_first": 1609, "page_last": 1616, "abstract": "Biological movement is built up of sub-blocks or motion primitives. Such primitives provide a compact representation of movement which is also desirable in robotic control applications. We analyse handwriting data to gain a better understanding of use of primitives and their timings in biological movements. Inference of the shape and the timing of primitives can be done using a factorial HMM based model, allowing the handwriting to be represented in primitive timing space. This representation provides a distribution of spikes corresponding to the primitive activations, which can also be modelled using HMM architectures. We show how the coupling of the low level primitive model, and the higher level timing model during inference can produce good reconstructions of handwriting, with shared primitives for all characters modelled. This coupled model also captures the variance profile of the dataset which is accounted for by spike timing jitter. The timing code provides a compact representation of the movement while generating a movement without an explicit timing model produces a scribbling style of output.", "full_text": "Modelling motion primitives and their timing\n\nin biologically executed movements\n\nBen H Williams\nSchool of Informatics\n\nUniversity of Edinburgh\n\n5 Forrest Hill, EH1 2QL, UK\n\nben.williams@ed.ac.uk\n\nMarc Toussaint\n\nTU Berlin\n\nFranklinstr. 28/29, FR 6-9\n\n10587 Berlin, Germany\n\nmtoussai@cs.tu-berlin.de\n\nAmos J Storkey\n\nSchool of Informatics\n\nUniversity of Edinburgh\n\n5 Forrest Hill, EH1 2QL, UK\n\na.storkey@ed.ac.uk\n\nAbstract\n\nBiological movement is built up of sub-blocks or motion primitives. Such\nprimitives provide a compact representation of movement which is also\ndesirable in robotic control applications. 
We analyse handwriting data to gain a better understanding of primitives and their timings in biological movements. Inference of the shape and the timing of primitives can be done using a factorial-HMM-based model, allowing the handwriting to be represented in primitive timing space. This representation provides a distribution of spikes corresponding to the primitive activations, which can also be modelled using HMM architectures. We show how the coupling of the low-level primitive model and the higher-level timing model during inference can produce good reconstructions of handwriting, with shared primitives for all characters modelled. This coupled model also captures the variance pro\ufb01le of the dataset, which is accounted for by spike timing jitter. The timing code provides a compact representation of the movement, while generating a movement without an explicit timing model produces a scribbling style of output.\n\n1 Introduction\n\nMovement planning and control is a very di\ufb03cult problem in real-world applications. Current robots have very good sensors and actuators, allowing accurate movement execution; however, the ability to organise complex sequences of movement is still far superior in biological organisms, despite their being encumbered with noisy sensory feedback and requiring control of many non-linear and variable muscles. The underlying question is that of the representation used to generate biological movement. There is much evidence to suggest that biological movement generation is based upon motor primitives, with discrete muscle synergies found in frog spines (Bizzi et al., 1995; d\u2019Avella & Bizzi, 2005; d\u2019Avella et al., 2003; Bizzi et al., 2002), evidence of primitives being locally \ufb01xed (Kargo & Giszter, 2000), and modularity in human motor learning and adaptation (Wolpert et al., 2001; Wolpert & Kawato, 1998). 
Compact forms of representation for any biologically produced data should therefore also be based upon primitive sub-blocks.\n\nFigure 1: (A) A factorial HMM of a handwriting trajectory Y_t. The parameters \u00af\u03bb^m_t indicate the probability of triggering a primitive in the mth factor at time t and are learnt for one speci\ufb01c character. (B) A hierarchical generative model of handwriting where the random variable c indicates the currently written character and de\ufb01nes a distribution over random variables \u03bb^m_t via a Markov model over G^m.\n\nThere are several approaches to using this idea of motion primitives for more e\ufb03cient robotic movement control. (Ijspeert et al., 2003; Schaal et al., 2004) use non-linear attractor dynamics as motion primitives and train them to generate motion that solves a speci\ufb01c task. (Amit & Matari\u0107, 2002) use a single attractor system and generate non-linear motion by modulating the attractor point. These approaches de\ufb01ne a primitive as a segment of movement rather than understanding movement as a superposition of concurrent primitives. The goal of analysing and better understanding biological data is to extract a generative model of complex movement based on concurrent primitives which may serve as an e\ufb03cient representation for robotic movement control. This is in contrast to previous studies of handwriting, which usually focus on the problem of character classi\ufb01cation rather than generation (Singer & Tishby, 1994; Hinton & Nair, 2005).\n\nWe investigate handwriting data and analyse whether it can be modelled as a superposition of sparsely activated motion primitives. The approach we take can intuitively be compared to a Piano Model (also called the piano-roll model (Cemgil et al., 2006)). 
Just as piano music can (approximately) be modelled as a superposition of the sounds emitted by each key, we follow the idea that biological movement is a superposition of pre-learnt motion primitives. This implies that the whole movement can be compactly represented by the timing of each primitive, in analogy to a score of music. We formulate a probabilistic generative model that re\ufb02ects these assumptions. On the lower level a factorial Hidden Markov Model (fHMM, Ghahramani & Jordan, 1997) is used to model the output as a combination of signals emitted from independent primitives (each primitive corresponds to a factor in the fHMM). On the higher level we formulate a model for the primitive timing dependent upon character class. The same motion primitives are shared across characters; only their timings di\ufb00er. We train this model on handwriting data using an EM-algorithm and thereby infer the primitives and the primitive timings inherent in this data. We \ufb01nd that the inferred timing posterior for a speci\ufb01c character is indeed a compact representation for the speci\ufb01c character which allows for a good reproduction of this character using the learnt primitives. Further, using the timing model learnt on the higher level we can generate new movement \u2013 new samples of characters (in the same writing style as the data), and also scribblings that exhibit local similarity to written characters when the higher level timing control is omitted.\n\nSection 2 will introduce the probabilistic generative model we propose. Section 3 brie\ufb02y describes the learning procedures, which are variants of the EM-algorithm adapted to our model. 
Finally, in section 4 we present results on handwriting data recorded with a digitisation tablet, show the primitives and timing code we extract, and demonstrate how the learnt model can be used to generate new samples of characters.\n\n2 Model\n\nOur analysis of primitives and primitive timings in handwriting is based on formulating a corresponding probabilistic generative model. This model can be described on two levels. On the lower level (Figure 1(A)) we consider a factorial Hidden Markov Model (fHMM) where each factor produces the signal of a single primitive and the linear combination of factors generates the observed movement Y_t. This level is introduced in the next section and was already considered in (Williams et al., 2006; Williams et al., 2007). It allows the learning and identi\ufb01cation of primitives in the data but does not include a model of their timing. In this paper we introduce the full generative model (Figure 1(B)), which includes a generative model for the primitive timing conditioned on the current character.\n\n2.1 Modelling primitives in data\n\nLet M be the number of primitives we allow for. We describe a primitive as a strongly constrained Markov process which remains in a zero state most of the time but with some probability \u00af\u03bb \u2208 [0, 1] enters the 1 state and then rigorously runs through all states 2, .., K before it enters the zero state again. While running through its states, this process emits a \ufb01xed temporal signal. More rigorously, we have an fHMM composed of M factors. 
The state of the mth factor at time t is S^m_t \u2208 {0, .., K^m}, and the transition probabilities are\n\nP(S^m_t = b | S^m_{t\u22121} = a, \u00af\u03bb^m_t) = { \u00af\u03bb^m_t for a = 0 and b = 1;  1 \u2212 \u00af\u03bb^m_t for a = 0 and b = 0;  1 for a \u2260 0 and b = (a + 1) mod K^m;  0 otherwise } .  (1)\n\nThis process is parameterised by the onset probability \u00af\u03bb^m_t of the mth primitive at time t. The M factors emit signals which are combined to produce the observed motion trajectory Y_t according to\n\nP(Y_t | S^{1:M}_t) = N(Y_t, \u2211_{m=1}^{M} W^m_{S^m_t}, C) ,  (2)\n\nwhere N(x, a, A) is the Gaussian density function over x with mean a and covariance matrix A. This emission is parameterised by W^m_s, which is constrained to W^m_0 = 0 (the zero state does not contribute to the observed signal), and C is a stationary output covariance.\n\nThe vector W^m_{1:K^m} = (W^m_1, .., W^m_{K^m}) is what we call a primitive and \u2013 to stay in the analogy \u2013 can be compared to the sound of a piano key. The parameters \u00af\u03bb^m_t \u2208 [0, 1] could be compared to the score of the music. We will describe below how we learn the primitives W^m_s and also adapt the primitive lengths K^m using an EM-algorithm.\n\n2.2 A timing model\n\nConsidering the \u00af\u03bb\u2019s to be \ufb01xed parameters is not a suitable model of biological movement. The usage and timing of primitives depend on the character that is written, and the timing varies from character to character. Also, the \u00af\u03bb\u2019s actually provide a rather high-dimensional representation for the movement. Our model takes a di\ufb00erent approach to parameterise the primitive activations. For instance, if a primitive is activated twice in the course of the movement we assume that there have been two signals (\u201cspikes\u201d) emitted from a higher level process which encode the activation times. 
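As an illustration, the constrained primitive process of equation (1) and the superposition output of equation (2) can be simulated directly. The following is only a minimal sketch, not the paper's implementation: it assumes a scalar output, a constant onset probability, and toy values for M, K, W and C.

```python
import numpy as np

rng = np.random.default_rng(0)

def step_primitive(state, K, lam):
    """One transition of the constrained primitive chain (equation 1):
    from the rest state 0 the chain enters state 1 with probability lam,
    otherwise it stays at 0; from any non-zero state it steps
    deterministically through 2, .., K-1 and back to 0."""
    if state == 0:
        return 1 if rng.random() < lam else 0
    return (state + 1) % K

# Toy setup (hypothetical values): M = 2 primitives with K = 5 states
# each, scalar output over T = 40 time steps.
M, K, T = 2, 5, 40
W = rng.normal(size=(M, K))   # primitive shapes
W[:, 0] = 0.0                 # constraint: the rest state emits nothing
lam_bar = 0.1                 # constant onset probability
C = 0.01                      # output noise variance

states = np.zeros(M, dtype=int)
Y = np.empty(T)
for t in range(T):
    states = np.array([step_primitive(s, K, lam_bar) for s in states])
    # Equation (2): the observation is the sum of the factors' current
    # emissions plus Gaussian noise.
    Y[t] = W[np.arange(M), states].sum() + np.sqrt(C) * rng.normal()
```

Run generatively with a flat onset probability like this, the model produces exactly the "scribbling" regime discussed in section 4, since primitives fire at random times.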
More formally, let c be a discrete random variable indicating the character to be written, see Figure 1(B). We assume that for each primitive we have another Markovian process which generates a length-L sequence of states G^m_l \u2208 {1, .., R, 0},\n\nP(G^m_{1:L} | c) = P(G^m_1 | c) \u220f_{l=2}^{L} P(G^m_l | G^m_{l\u22121}, c) .  (3)\n\nThe states G^m_l encode which primitives are activated and how they are timed, as seen in Figure 2(b). We now de\ufb01ne \u03bb^m_t to be a binary random variable that indicates the activation of a primitive at time t, which we call a \u201cspike\u201d.\n\nFigure 2: (a) Illustration of equation (4): The Markov process on the states G^m_l emits Gaussian components to the onset probabilities P(\u03bb^m_t = 1). (b) Scatter plot of the MAP onsets of a single primitive for di\ufb00erent samples of the same character \u2018p\u2019. Gaussian components can be \ufb01t to each cluster.\n\nFor a zero-state G^m_l = 0 no spike is emitted and thus the probability of \u03bb^m_t = 1 is not increased. A non-zero state G^m_l = r adds a Gaussian component to the probabilities of \u03bb^m_t = 1, centred around a typical spike time \u00b5^m_r and with variance \u03c3^m_r,\n\nP(\u03bb^m_t = 1 | G^m_{1:L}, c) = \u2211_{l=1}^{L} \u03b4_{G^m_l > 0} \u222b_{t\u22120.5}^{t+0.5} N(t\u2032, \u00b5^m_{G^m_l}, \u03c3^m_{G^m_l}) dt\u2032 .  (4)\n\nHere, \u03b4_{G^m_l > 0} is zero for G^m_l = 0 and 1 otherwise, and the integral essentially discretises the Gaussian density. Additionally, we restrict the Markovian process such that each Gaussian component can emit at most one spike, i.e., we constrain P(G^m_l | G^m_{l\u22121}, c) to be a lower triangular matrix. 
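The onset probabilities of equation (4) are straightforward to compute once the spike-time means and variances are given. The sketch below is illustrative only and assumes hypothetical values for the means and variances; each non-zero state contributes the probability mass of its Gaussian falling in the bin [t \u2212 0.5, t + 0.5], while a zero state contributes nothing (a skipped spike).

```python
from math import erf, sqrt

def norm_cdf(x, mu, sigma):
    """Cumulative distribution function of N(mu, sigma^2)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def onset_prob(t, G, mu, sigma):
    """P(lambda_t = 1 | G) as in equation (4): sum, over the active
    components of G, of the Gaussian mass in the bin [t - 0.5, t + 0.5]."""
    p = 0.0
    for g in G:
        if g > 0:  # g == 0 means this component emits no spike
            p += norm_cdf(t + 0.5, mu[g], sigma[g]) - norm_cdf(t - 0.5, mu[g], sigma[g])
    return p

# Hypothetical example: two active components with typical spike times
# 10 and 25 (in time-step units); a third component is skipped (state 0).
mu = {1: 10.0, 2: 25.0}
sigma = {1: 1.5, 2: 2.0}
G = [1, 2, 0]
probs = [onset_prob(t, G, mu, sigma) for t in range(40)]
```

The resulting profile has one bump per expressed spike, mirroring the clusters of MAP onsets in Figure 2(b), and its total mass is close to the number of expressed spikes.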
Given the \u03bb\u2019s, the state transitions in the fHMM factors are as in equation (1), replacing \u00af\u03bb by \u03bb.\n\nTo summarise, the spike probabilities of \u03bb^m_t = 1 are a sum of at most L Gaussian components centred around the means \u00b5^m_l and with variances \u03c3^m_l. Whether or not such a Gaussian component is present is itself randomised and depends on the states G^m_l. We can observe at most L spikes in one primitive; the spike times between di\ufb00erent primitives are independent, but we have a Markovian dependency between the presence and timing of spikes within a primitive. The whole process is parameterised by the initial state distribution P(G^m_1 | c), the transition probabilities P(G^m_l | G^m_{l\u22121}, c), the spike means \u00b5^m_r and the variances \u03c3^m_r. All these parameters will be learnt using an EM-algorithm.\n\nThis timing model is motivated from results with the fHMM-only model: When training the fHMM on data of a single character and then computing the MAP spike times using a Viterbi alignment for each data sample, we \ufb01nd that the MAP spike times are roughly Gaussian distributed around a number of means (see Figure 2(b)). This is why we used a sum of Gaussian components to de\ufb01ne the onset probabilities P(\u03bb = 1). However, the data is more complicated than provided for by a simple Mixture of Gaussians. Not every sample includes an activation for each cluster (which is a source of variation in the handwriting) and there cannot be more than one spike in each cluster. 
Therefore we introduced the constrained Markov process on the states G^m_l, which may skip the emission of some spikes.\n\n3 Inference and learning\n\nIn the experiments we will compare both the fHMM without the timing model (Figure 1(A)) and the full model including the timing model (Figure 1(B)).\n\nIn the fHMM-only model, inference in the fHMM is done using variational inference as described in (Ghahramani & Jordan, 1997). Using a standard EM-algorithm we can train the parameters W, C and \u00af\u03bb. To prevent over\ufb01tting we assume the spike probabilities are stationary (\u03bb^m_t constant over t) and learn only a single mean parameter \u00af\u03bb^m for each primitive.\n\nFigure 3: (a) Reconstruction of a character from a training dataset, using a subset of the primitives. The thickness of the reconstruction represents the pressure of the pen tip, and the di\ufb00erent colours represent the activity of the di\ufb00erent primitives, the onsets of which are labelled with an arrow. The posterior probability of primitive onset is shown on the left, highlighting why a spike timing representation is appropriate. (b) Plots of the 10 extracted primitives, as drawn on paper. (c) Generative samples using a \ufb02at primitive onset prior, showing the scribbling behaviour of the uncoupled model.\n\nIn the full model, inference is an iterative process of inference in the timing model and inference in the fHMM. 
Note that variational inference in the fHMM is itself an iterative process which recomputes the posteriors over S^m_t after adapting the variational parameters. We couple this iteration to inference in the timing model in both directions: In each iteration, the posterior over S^m_t de\ufb01nes observation likelihoods for inference in the Markov models G^m_l. Inversely, the resulting posterior over G^m_l de\ufb01nes a new prior over \u03bb\u2019s (a message from G^m_l to \u03bb^m_t) which enters the fHMM inference in the next iteration. Standard M-steps are then used to train all parameters of the fHMM and the timing model. In addition, we use heuristics to adapt the length K^m of each primitive: we increase or decrease K^m depending on whether the learnt primitive is signi\ufb01cantly di\ufb00erent from zero in the last time steps. The number of parameters used in the model therefore varies during learning, as the size of W depends upon K^m, and the size of G depends upon the number of inferred spikes.\n\nIn the experiments we will also investigate the reconstruction of data. By this we mean that we take a trained model, use inference to compute the MAP spikes \u03bb for a speci\ufb01c data sample, and then use these \u03bb\u2019s and the de\ufb01nition of our generative model (including the learnt primitives W) to generate a trajectory which can be compared to the original data sample. Such a reconstruction can be computed using both the fHMM-only model and the full model.\n\n4 Results\n\n4.1 Primitive and timing analysis using the fHMM-only model\n\nWe \ufb01rst consider a data set of 300 handwritten \u2018p\u2019s recorded using an INTUOS 3 WACOM digitisation tablet (http://www.wacom.com/productinfo/9x12.cfm), providing trajectory data at 200Hz. 
The trajectory Y_t we model is the normalised \ufb01rst di\ufb00erential of the data, so that the data mean was close to zero, providing the requirements for the zero state assumption in the model constraints. Three-dimensional data was used: x-position, y-position, and pressure. The data collected were separated into samples, or characters, allowing each sample to be separately normalised.\n\nOur choice of parameters was M = 10 primitives; we initialised all K^m = 20 and constrained them to be smaller than 100 throughout learning.\n\nFigure 4: (a) Reconstructions of \u2018p\u2019s using the full model. (b) Histogram of the reconstruction error, in 3-dimensional pen movement velocity space. These errors were produced using over 300 samples of a single character. (c) Generative samples using the full generative model (Figure 1(B)).\n\nWe trained the fHMM-only model on this dataset. Figure 3(a) shows the reconstruction of a speci\ufb01c sample of this data set and the corresponding posterior over \u03bb\u2019s. This clean posterior is the motivation for introducing a model of the spike timings as a compact representation of the data. Equally the reconstruction (using the Viterbi-aligned MAP spikes) shows the su\ufb03ciency of the spike code to generate the character. Figure 3(b) shows the primitives W^m (translated back into pen-space) that were learnt and implicitly used for the reconstruction of the \u2018p\u2019. 
These primitives can be seen to represent typical parts of the \u2018p\u2019 character; the arrows in the reconstruction indicate when they are activated.\n\nThe fHMM-only model can be used to reconstruct a speci\ufb01c data sample using the MAP \u03bb\u2019s of that sample, but it cannot \u2018autonomously\u2019 produce characters since it lacks a model of the timing. To show the importance of this spike timing information, we can demonstrate the e\ufb00ects of removing it. When using the fHMM-only model as a generative model with the learnt stationary spike probabilities \u00af\u03bb^m the result is a form of primitive babbling, as can be seen in Figure 3(c). Since these scribblings are generated by random expression of the learnt primitives they locally resemble parts of the \u2018p\u2019 character.\n\nThe primitives generalise to other characters if the training dataset contained su\ufb03cient variation. Further investigation has shown that 20 primitives learnt from 12 character types are su\ufb03ciently general to represent all remaining novel character types without further learning, by using a single E-step to \ufb01t the pre-learnt parameters to a novel dataset.\n\n4.2 Generating new characters using the full generative model\n\nNext we trained the full model on the same \u2018p\u2019 dataset. Figure 4(a) shows the reconstructions of some samples of the data set. To the right we see the reconstruction errors in velocity space, showing that at many time points a perfect reconstruction was attained. Since the full model includes a timing model it can also be run autonomously as a generative model for new character samples. Figure 4(c) displays such new samples of the character \u2018p\u2019 generated by the learnt model.\n\nAs a more challenging problem we collected a data set of over 450 character samples of the letters a, b and c. 
The full model includes the written character class as a random variable and can thus be trained on multi-character data sets. Note that we restrict the total number of primitives to M = 10, which will require a sharing of primitives across characters. Figure 5(a) shows samples of the training data set while Figure 5(b) shows reconstructions of the same samples using the MAP \u03bb\u2019s in the full model. Generally, the reconstructions using the full model are better than those using the fHMM-only model. This can be understood by investigating the distribution of the MAP \u03bb\u2019s across di\ufb00erent samples under the fHMM-only and the full model, see Figure 6. Coupling the timing and the primitive model during learning has the e\ufb00ect of trying to learn primitives from data that are usually in the same place. Thus, using the full model the inferred spikes are more compactly clustered at the Gaussian components due to the prior imposed from the timing model (the thick black lines correspond to Equation (4)).\n\nFigure 5: (a) Training dataset, showing 3 character types and variation. (b) Reconstruction of the dataset using 10 primitives learnt from the dataset in (a). (c) Generative samples using the full generative model (Figure 1(B)).\n\nFigure 6: (a) Scatter plot of primitive onset spikes for a single character type across all samples and primitives, showing the clustering of certain primitives in particular parts of a character. The horizontal bars separate the results for di\ufb00erent primitives. (b) Scatter plot of spikes from the same dataset, with a coupled model, showing suppression of outlying spikes and tightening of clusters. The thick black lines display the prior over \u03bb\u2019s imposed from the timing model via Equation (4).\n\nFinally, we run the full model autonomously to generate new character samples, see Figure 5(c). Here the character class c is \ufb01rst sampled uniformly at random and then all learnt parameters are used to eventually sample a trajectory Y_t. The generative samples show interesting variation while still being readably a character.\n\n5 Conclusions\n\nIn this paper we have shown that it is possible to represent handwriting using a primitive-based model. The model consists of a superposition of several arbitrary \ufb01xed functions. These functions are time-extended, of variable length (during learning), and are superimposed with learnt o\ufb00sets. The timing of activations is crucial to the accurate reproduction of the character. 
With a small amount of timing variation, a distorted version of the original character is reproduced, whilst large (and coordinated) di\ufb00erences in the timing pattern produce di\ufb00erent character types.\n\nThe spike code provides a compact representation of movement, unlike that which has previously been explored in the domain of robotic control. We have proposed to use Markov processes conditioned on the character as a model for these spike emissions. Besides contributing to a better understanding of biological movement, we hope that such models will also inspire applications in robotic control, e.g., for movement optimisation based on spike codings.\n\nAn assumption made in this work is that the primitives are learnt velocity pro\ufb01les. We have not included any feedback control systems in the primitive production; however, the presence of low-level feedback, such as in a spring system (Hinton & Nair, 2005) or dynamic motor primitives (Ijspeert et al., 2003; Schaal et al., 2004), would be interesting to incorporate into the model, and could perhaps be done by changing the outputs of the fHMM to parameterise the spring systems rather than be Gaussian distributions of velocities.\n\nWe make no assumptions about how the primitives are learnt in biology. It would be interesting to study the evolution of the primitives during human learning of a new character set. As humans become more con\ufb01dent at writing a character, the reproduction becomes faster and more repeatable. This could be related to a more accurate and e\ufb03cient use of primitives already available. However, it might also be the case that new primitives are learnt, or old ones adapted. 
More research needs to be done to examine these various possibilities of how humans learn new motor skills.\n\nAcknowledgements\n\nMarc Toussaint was supported by the German Research Foundation (DFG), Emmy Noether fellowship TO 409/1-3.\n\nReferences\n\nAmit, R., & Matari\u0107, M. (2002). Parametric primitives for motor representation and control. Proc. of the Int. Conf. on Robotics and Automation (ICRA) (pp. 863\u2013868).\n\nBizzi, E., d\u2019Avella, A., Saltiel, P., & Tresch, M. (2002). Modular organization of spinal motor systems. The Neuroscientist, 8, 437\u2013442.\n\nBizzi, E., Giszter, S., Loeb, E., Mussa-Ivaldi, F., & Saltiel, P. (1995). Modular organization of motor behavior in the frog\u2019s spinal cord. Trends in Neurosciences, 18, 442\u2013446.\n\nCemgil, A., Kappen, B., & Barber, D. (2006). A generative model for music transcription. IEEE Transactions on Speech and Audio Processing, 14, 679\u2013694.\n\nd\u2019Avella, A., & Bizzi, E. (2005). Shared and speci\ufb01c muscle synergies in natural motor behaviors. PNAS, 102, 3076\u20133081.\n\nd\u2019Avella, A., Saltiel, P., & Bizzi, E. (2003). Combinations of muscle synergies in the construction of a natural motor behavior. Nature Neuroscience, 6, 300\u2013308.\n\nGhahramani, Z., & Jordan, M. (1997). Factorial hidden Markov models. Machine Learning, 29, 245\u2013275.\n\nHinton, G. E., & Nair, V. (2005). Inferring motor programs from images of handwritten digits. Advances in Neural Information Processing Systems 18 (NIPS 2005) (pp. 515\u2013522).\n\nIjspeert, A. J., Nakanishi, J., & Schaal, S. (2003). Learning attractor landscapes for learning motor primitives. Advances in Neural Information Processing Systems 15 (NIPS 2003) (pp. 1523\u20131530). MIT Press, Cambridge.\n\nKargo, W., & Giszter, S. (2000). Rapid corrections of aimed movements by combination of force-\ufb01eld primitives. J. 
Neurosci., 20, 409\u2013426.\n\nSchaal, S., Peters, J., Nakanishi, J., & Ijspeert, A. (2004). Learning movement primitives. ISRR 2003.\n\nSinger, Y., & Tishby, N. (1994). Dynamical encoding of cursive handwriting. Biol. Cybern., 71, 227\u2013237.\n\nWilliams, B., Toussaint, M., & Storkey, A. (2006). Extracting motion primitives from natural handwriting data. Int. Conf. on Arti\ufb01cial Neural Networks (ICANN) (pp. 634\u2013643).\n\nWilliams, B., Toussaint, M., & Storkey, A. (2007). A primitive based generative model to infer timing information in unpartitioned handwriting data. Int. Joint Conf. on Arti\ufb01cial Intelligence (IJCAI) (pp. 1119\u20131124).\n\nWolpert, D. M., Ghahramani, Z., & Flanagan, J. R. (2001). Perspectives and problems in motor learning. TRENDS in Cog. Sci., 5, 487\u2013494.\n\nWolpert, D. M., & Kawato, M. (1998). Multiple paired forward and inverse models for motor control. Neural Networks, 11, 1317\u20131329.\n", "award": [], "sourceid": 665, "authors": [{"given_name": "Ben", "family_name": "Williams", "institution": null}, {"given_name": "Marc", "family_name": "Toussaint", "institution": null}, {"given_name": "Amos", "family_name": "Storkey", "institution": null}]}