{"title": "Nonlinear Filtering of Electron Micrographs by Means of Support Vector Regression", "book": "Advances in Neural Information Processing Systems", "page_first": 717, "page_last": 724, "abstract": "", "full_text": "Nonlinear Filtering of Electron\n\nMicrographs by Means of Support Vector\n\nRegression\n\nR. Vollgraf1, M. Scholz1, I. A. Meinertzhagen2, K. Obermayer1\n\n1Department of Electrical Engineering and Computer Science\n\nBerlin University of Technology, Germany\n\nfvro,idefix,obyg@cs.tu-berlin.de\n2Dalhousie University, Halifax, Canada\n\niam@is.dal.ca\n\nAbstract\n\nNonlinear (cid:12)ltering can solve very complex problems, but typically\ninvolve very time consuming calculations. Here we show that for\n(cid:12)lters that are constructed as a RBF network with Gaussian basis\nfunctions, a decomposition into linear (cid:12)lters exists, which can be\ncomputed e(cid:14)ciently in the frequency domain, yielding dramatic\nimprovement in speed. We present an application of this idea to\nimage processing. In electron micrograph images of photoreceptor\nterminals of the fruit (cid:13)y, Drosophila, synaptic vesicles containing\nneurotransmitter should be detected and labeled automatically. We\nuse hand labels, provided by human experts, to learn a RBF (cid:12)lter\nusing Support Vector Regression with Gaussian kernels. We will\nshow that the resulting nonlinear (cid:12)lter solves the task to a degree of\naccuracy, which is close to what can be achieved by human experts.\nThis allows the very time consuming task of data evaluation to be\ndone e(cid:14)ciently.\n\n1\n\nIntroduction\n\nUsing (cid:12)lters for image processing can be understood as a supervised learning method\nfor classi(cid:12)cation and segmentation of certain image elements. A given training im-\nage would contain a target that should be approximated by some (cid:12)lter at every\nlocation. 
In principle, any kind of machine-learning technique could be employed to learn the mapping from the input receptive field of the filter to the target value. The simplest filter is a linear mapping. It has the advantage that it can be computed very efficiently in the frequency domain. However, linear filters may not be complex enough for difficult problems. The complexity of nonlinear filters is in principle unlimited (if we leave generalization issues aside), but the computation of the filter output can be very time consuming, since usually there is no shortcut in the frequency domain as there is for linear filters. However, for nonlinear filters that are linear superpositions of Gaussian radial basis functions, there exists a decomposition into linear filters, allowing the filter output to be computed in reasonable time. This sort of nonlinear filtering is obtained, for example, when Support Vector Machines (SVM) with a Gaussian kernel are used for learning. SVMs have proved to yield good performance in many applications [1]. This, and the ability to compute the filter output in an affordable time, make SVMs interesting for nonlinear filtering in image processing tasks. Here we apply this new method to the evaluation of electron micrograph images taken from the visual system of the fruit fly, Drosophila, as a means to analyze morphological phenotypes of new genetic mutants. Genetically manipulable organisms such as Drosophila provide means to address many current questions in neuroscience. The action even of lethal genes can be uncovered in photoreceptors by creating homozygous whole-eye mosaics in heterozygous flies [2]. Mutant synaptic phenotypes are then interpretable from detailed ultrastructural knowledge of the photoreceptor terminals R1-R6 in the lamina [3]. 
Electron microscopy (EM) alone offers the resolution required to analyze sub-cellular structure, even though this technique is tedious to undertake. In Drosophila genetics, hundreds of mutants of the visual system have been isolated, many even from a single genetic screen. The task of analyzing each of these mutants manually is simply not feasible; hence reliable automatic (computer-assisted) methods are needed. The focus here is just to count the number of synaptic vesicles, but in general the method proposed in this report could be extended to the analysis of other structures as well.

As representative datasets showing the feasibility of the proposed method, we have chosen two datasets from wild type Drosophila (ter01 for training and ter04 for performance evaluation, cf. Fig. 1) and one from a visual system mutant (mutant, also for performance evaluation, cf. Fig. 2, left).

2 Learning the RBF Filter

Given an image x, we want to find an RBF filter with Gaussian basis functions, the output of which is closest to a target image y in terms of some suitable distance measure. The filter is constrained to some receptive field P, so that its output at position r is formulated in the most general form as

z(r) = f_RBF(x(r)) = f_RBF((x(r + Δr_1), ..., x(r + Δr_M))^T),

where P = {Δr_1, ..., Δr_M} is the neighborhood that forms the receptive field. In the following we will continue using boldface symbols to indicate a vector containing the neighborhood (patch) at some location, while lightface symbols indicate the value of the image itself. Individual elements of patches are addressed by a subscript, for example x_Δr(r) = x(r + Δr). f_RBF is an RBF network with M input dimensions. It can be implemented as a feed-forward net with a single hidden layer containing a fixed number of RBF units and a linear output layer [4]. 
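As a concrete illustration of this notation, the receptive-field vector x(r) can be extracted from an image as follows. This is a minimal numpy sketch; the function name and the top-left anchoring of the offsets Δr are our own illustrative choices, not from the paper (anchoring at the corner instead of centering on r only shifts the output by a constant offset):

```python
import numpy as np

def patch(x, r, p):
    """Return the receptive-field vector x(r): the image values at the
    offsets P = {Delta r_1, ..., Delta r_M}, here a p-by-p square whose
    offsets run over [0, p) x [0, p) relative to r, flattened to a vector."""
    r0, r1 = r
    return x[r0:r0 + p, r1:r1 + p].reshape(-1)

x = np.arange(25.0).reshape(5, 5)   # toy 5 x 5 "image"
print(patch(x, (1, 1), 2))          # the 2 x 2 neighborhood at r = (1, 1)
```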
However, we would rather use the technique of Support Vector Regression (SVR) [5], as it has a number of advantages over RBF feed-forward networks. It offers adjustable model complexity depending on the training data, thus providing good generalization performance. The training of SVR is a quadratic, constrained optimization problem, which can be solved efficiently without being trapped in local minima. In the linear case the formulation of the ν-SVR, as it was introduced in [6], is

minimize   τ(w, ξ^(*), ε) = (1/2)||w||² + C · (νε + (1/l) Σ_{i=1}^{l} (ξ_i + ξ*_i))   (1)

s.t.   ((w · x_i) + b) − y_i ≤ ε + ξ_i ,   y_i − ((w · x_i) + b) ≤ ε + ξ*_i   (2)

       ξ^(*)_i ≥ 0 ,   ε ≥ 0   (3)

The constraints implement as a distance measure the ε-insensitive loss |y − f(x)|_ε = max{0, |y − f(x)| − ε}, which is a basic feature of SVR and has been shown to yield robust estimation. The objective itself provides a solution of low complexity (small ||w||²) and, at the same time, low errors, balanced by C. In contrast to ε-SVR, as first introduced in [5], parameterization with the hyperparameter ν also allows optimization of the width ε of the insensitive region. Interacting with C, ν controls the complexity of the model. It provides an upper bound on the fraction of outliers (samples that do not fall into the ε-tube) and a lower bound on the fraction of support vectors (SV); see [6] and [1] for further details. As usual for SVMs, the system is transformed into a nonlinear regressor by replacing the scalar product with a kernel that fulfills Mercer's condition [7]. 
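An off-the-shelf ν-SVR implementation makes this concrete. The sketch below uses scikit-learn's NuSVR (which wraps libsvm) on toy data standing in for the patch regression problem; the data and parameter values are illustrative only. Note that scikit-learn parameterizes the RBF kernel as exp(−gamma·||x − x'||²), i.e. its gamma is the reciprocal of the kernel width γ used in this paper:

```python
import numpy as np
from sklearn.svm import NuSVR

# Toy stand-in for the patch regression problem: inputs play the role of
# flattened patches, targets are a smooth nonlinear function of them.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 25))
y = np.exp(-np.sum(X**2, axis=1) / 25.0)

# nu upper-bounds the fraction of outliers and lower-bounds the fraction
# of support vectors; C balances model complexity against training errors.
model = NuSVR(kernel="rbf", nu=0.1, C=1.0, gamma=1.0 / 25.0).fit(X, y)

# The fitted regressor has exactly the form of an RBF expansion over the
# support vectors x_i:  sum_i alpha_i^(*) k(x_i, x) + b.
sv, alpha, b = model.support_vectors_, model.dual_coef_.ravel(), model.intercept_
K = np.exp(-np.sum((sv[:, None, :] - X[None, :5, :])**2, axis=-1) / 25.0)
assert np.allclose(alpha @ K + b, model.predict(X[:5]))
```

The final assertion checks that the learned model really is the sparse kernel expansion described in the text.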
With a Gaussian kernel (RBF kernel) the regression function is

z(r) = Σ_{i=1}^{l} α^(*)_i z_i(r) + b ,   (4)

where

z_i(r) = k(x_i, x(r)) = exp(−(1/γ) Σ_{Δr∈P} (x_{i,Δr} − x(r + Δr))²)   (5)

is the Gaussian or RBF kernel. The resulting SVs x_i are a subset of the training examples, for which one of the constraints (2) holds with equality. They correspond to Lagrange multipliers having α^(*)_i = (α_i − α*_i) ≠ 0. In the analogy to an RBF network, the SVs are the centers of the basis functions, while the α^(*)_i are the weights of the output layer.

3 RBF Filtering

To evaluate an RBF network filter at location r, all the basis functions have to be evaluated for the neighborhood x(r). This calculation is computationally very expensive when computed in the straightforward way given by (5). If the squared sum is multiplied out, however, we can compute the kernel as

z_i(r) = exp(−(1/γ) (||x_i||² − 2 z'_i(r) + z''_i(r))) ,   (6)

where

z'_i(r) = Σ_{Δr∈P} x_{i,Δr} x(r + Δr)   and   z''_i(r) = Σ_{Δr∈P} x(r + Δr)² .   (7)

Now we are left with linear filtering operations only, the two cross-correlations z' and z'', which can be efficiently computed in the frequency domain, where the cross-correlation of a signal with some filter becomes a multiplication of the signal's spectrum with the complex conjugate spectrum of the filter. This operation is so much faster that the additional computational cost of the Fourier transform is negligible. Note that in fact z'' is the cross-correlation of x² with the filter o, which is 1 for all Δr ∈ P. We need to compute the following Fourier transforms:

X(jω) ≡ F[x(r)] ,   X^(2)(jω) ≡ F[x²(r)] ,   X_i(jω) ≡ F[x_i(r)] ,   O(jω) ≡ F[o(r)] .   (8)

x_i(r) and o(r) are the filters x_i and o, zero-filled for r ∉ P to the size of the image. It is necessary to take care of the placement of the origin Δr = 0 and the mapping of negative offsets in P, which depends on the implementation of the Fourier transform. Now z_i is easily computed as

z_i(r) ≡ exp(−(1/γ) (x_i^T x_i − F^(−1)[2 X_i^C(jω) X(jω) − O^C(jω) X^(2)(jω)])) ,   (9)

where (·)^C denotes the complex conjugate. Using the Fast Fourier Transform (FFT), the speed improvement is much higher when the size of x is a power of 2 [8]. Thus one should consider enlarging the image by adding the appropriate number of zeros at the border. However, this can lead to large overhead regions when the image size is not close to the next power of 2. For this reason we use a tiling scheme, which processes the image in smaller parts of power-of-2 size, which can cover the entire image more closely. It is important to be aware of the distorted margins of the image or its tiles when filtering is done in the frequency domain. Because the cross-correlation in the frequency domain is cyclic, points at the margin, for which the neighborhood P exceeds the image boundaries, have incorrect values in the filter's output. This is particularly important for the tiling scheme, which has to provide sufficient overlap between the tiles, so that the image can be covered completely with the uncorrupted inner parts of the tiles. Table 1 summarizes the speed-up gained by the described filtering method. Most of the performance gain is obtained through the filtering in the frequency domain. However, splitting the image into tiles of appropriate size can improve speed even further.

Table 1: Computation time examples for different filtering methods.

filtering acc. to (5): 6d 10h
FFT filtering, whole image: 55m
FFT filtering, tiles of 256 × 256: 24m

• image size 1686 × 1681 pixels
• 200 SVs of 50 × 50 pixels size
• implementation in MATLAB
• SUN F6800 / 750 MHz, 1 CPU

4 Experiments

To test the performance of the method we used two images of wild type and one of mutant photoreceptor terminals. The profiles of the terminals typically contain about 100 synaptic vesicles, the number of which could differ if the genes for membrane trafficking are mutated. Detecting such numerical differences is a simple but tedious task best suited to a computational approach. The wild type images came from electron micrographs of the same animal under the same physiological conditions. For all images, visual identification and hand labelings of the vesicles were made. Image ter01 (Fig. 1, left) was used for training. The validation error on ter04 (Fig. 1, right) was considered for model selection. Then the best model was tested on the mutant image (Fig. 2).

Figure 1: EM images of photoreceptor terminals of the wild type fruit fly, Drosophila melanogaster. The left image (ter01) was used for training, the right image (ter04) for validation. Arrow: individual synaptic vesicle, 30 nm in diameter.

4.1 Construction of the Target

ter01 contains 286 hand-labeled vesicles at discrete positions. To generate a smooth target image y, circular Gaussian blobs with σ² = 40 and a peak value of 1 were placed on every label. Training examples x(r) were then generated from ter01 by taking square patches centered around r. We set the patch size to P = 50 × 50 pixels, to cover an entire vesicle plus a little of its surround. The corresponding values y(r) of the target image were used as targets for regression. 
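The linear-filter decomposition of Section 3 (Eqs. (6)-(9)) can be sketched compactly with numpy's FFT routines. This is an illustrative re-implementation, not the authors' MATLAB code; the receptive-field offsets are anchored at the top-left corner (the paper's caveat about origin placement applies), and the corrupted cyclic margins are left unmasked:

```python
import numpy as np

def rbf_filter_fft(x, sv_patches, alphas, b, gamma):
    """Evaluate z(r) = sum_i alpha_i^(*) z_i(r) + b over a whole image,
    with each z_i computed via the decomposition (6)-(9): every kernel
    needs only the cross-correlations z' (image with SV patch) and
    z'' (squared image with the box filter o).  Margins where the cyclic
    correlation wraps around the image boundary are not corrected here."""
    n_sv, p, _ = sv_patches.shape
    X = np.fft.rfft2(x)                       # F[x]
    X2 = np.fft.rfft2(x * x)                  # F[x^2]
    o = np.zeros_like(x)
    o[:p, :p] = 1.0                           # box filter: 1 on P, 0 elsewhere
    # z''(r): cross-correlation of x^2 with o, shared by all SVs
    z2 = np.fft.irfft2(np.conj(np.fft.rfft2(o)) * X2, s=x.shape)
    z = np.full(x.shape, float(b))
    for a, sv in zip(alphas, sv_patches):
        xi = np.zeros_like(x)
        xi[:p, :p] = sv                       # zero-fill SV to image size
        # z'(r): cross-correlation of x with the SV patch
        z1 = np.fft.irfft2(np.conj(np.fft.rfft2(xi)) * X, s=x.shape)
        z += a * np.exp(-(np.sum(sv * sv) - 2.0 * z1 + z2) / gamma)
    return z
```

For 'valid' positions r, i.e. those whose p × p neighborhood stays inside the image, this agrees with the direct evaluation of Eq. (5), at the cost of one FFT per support vector instead of one kernel evaluation per pixel and support vector.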
The most complete training set would clearly contain patches from all locations, which, however, would be computationally infeasible. Instead we used patches from all hand-label positions and additionally 2000 patches from random positions. No patches exceeded the image boundaries. With these data the SVM was trained. We used the libsvm implementation [9], which, among other variants, also contains the ν-SVR. Mainly three parameters have to be adjusted for training the ν-SVR: the width of the RBF kernel γ and the parameters ν and C. Since the training dataset is small compared to the input dimensionality, the validation error on ter04 is subject to large variance. Therefore we cannot give a complete parameter exploration here, but we would expect a model with not too much complexity to give the best generalization. It turned out that, for the given conditions, a kernel size of γ = 20,000 together with a low value ν = 0.1 and C = 0.01 yields good validation results on ter04. The optimization returned 245 SVs, 185 of which were outliers. The kernel width is large compared with the average distance of the training examples in input space, which was < 2,000. Because the computation time of the filter grows linearly with the number of SVs, we are strongly interested in a solution with only few SVs. This requires small values of ν, since ν is a lower bound on the fraction of SVs. At the same time, small ν values give a large ε and hence restrict the model complexity. After filtering, the decision as to which points in z correspond to vesicles has to be made. Although the regions of high amplitude form sharp peaks, they still have some spatial extent. Therefore we first discriminate for the peak locations and then for the amplitude. 
In a first step, we determine those locations r for which z(r) is a local maximum in some neighborhood, whose size is determined roughly by the size of a vesicle, i.e. we consider the set

Q_d = { r : z(r) = max_{Δr : ||Δr|| ≤ d} z(r + Δr) } .   (10)

Then a threshold is applied to the candidates in Q_d to yield the set of locations that are considered detected vesicles,

Q_θ = { r ∈ Q_d : z(r) > θ } .   (11)

We kept the parameter d = 15 constant in our experiments and vary only the threshold θ.

4.2 Performance Evaluation

To evaluate the performance of the method, the set of detected vesicles Q_θ must be compared with the set Q_Exp, which contains the locations detected by a human expert. Clearly this is only meaningful when done on data that was not used to train the SVM. We note that the location of the same vesicle may vary slightly between Q_θ and Q_Exp, due, for example, to fluctuations in the manual labeling. So we need to find the set Q_match, containing pairs (r_1, r_2) with r_1 ∈ Q_θ, r_2 ∈ Q_Exp, such that r_1 and r_2 are close to each other and describe the location of the same vesicle. We compute this with a simple, greedy but fast algorithm:

• compute the matrix D_ij = ||r_i − r_j|| for all r_i ∈ Q_θ, r_j ∈ Q_Exp
• while D_ij = min D ≤ d_m:
  – put (r_i, r_j) into Q_match
  – fill the i-th row and j-th column of D with +∞

The resulting pairs of matching locations are closer than d_m, which should be set approximately to the radius of a vesicle. This algorithm does not in general find the globally optimal assignment, but for low point densities the error it makes is usually low. 
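The greedy matching above can be written down directly; the following is a minimal numpy sketch (our own illustration, not the authors' code):

```python
import numpy as np

def greedy_match(det, exp, d_m):
    """Greedily pair detections with expert labels (Sec. 4.2): repeatedly
    take the globally closest remaining (detection, label) pair while its
    distance is at most d_m, then discard that row and column of D."""
    det = np.asarray(det, float)
    exp = np.asarray(exp, float)
    D = np.linalg.norm(det[:, None, :] - exp[None, :, :], axis=-1)
    matches = []
    while D.size and D.min() <= d_m:
        i, j = np.unravel_index(np.argmin(D), D.shape)
        matches.append((i, j))
        D[i, :] = np.inf                  # remove i-th detection
        D[:, j] = np.inf                  # remove j-th expert label
    return matches
```

The number of returned pairs is the cardinality #Q_match that enters the detection rates.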
Now we can evaluate the fraction of correctly detected vesicles and the fraction of false positives,

f_c = #Q_match / #Q_Exp ,   f_fp = 1 − #Q_match / #Q_θ ,   (12)

where # denotes the cardinality of a set. Depending on the threshold θ, #Q_θ may change, and so does #Q_match. So we get different values for f_c and f_fp. We summarize these two rates in a diagram which, following [10], we call a Receiver Operating Characteristic (ROC). In the terms of [10], f_c represents the hit rate and f_fp the false alarm rate, cf. Fig. 3. However, our ROC differs in some aspects. f_c need not reach 1 for arbitrarily low thresholds, as it is restricted by the set Q_d, which need not contain a match for every element of Q_Exp. Furthermore, raising the threshold (decreasing #Q_θ) may occasionally increase #Q_match, due to the greedy matching algorithm. These artifacts yield nonmonotonic parts in the ROC. If no a priori costs are assigned to f_c and f_fp, then a natural measure of quality is the area below the ROC, which would be close to 1 at best, and 0 if Q_d contained no match at all.

4.3 Results

The ROC of the validation with ter04 and mutant is shown in Fig. 3. The rates f_c and f_fp were computed for 50 different threshold values, covering the interval [min_{r∈Q_d} z(r), max_{r∈Q_d} z(r)]. For ter04 there exist four, and for mutant two, human expert labelings. Therefore we can plot either four or two curves, respectively, and get an impression of the variance of our performance measure, the area below the curve. Furthermore, the multiple hand labelings allow us to plot them

Figure 2: left: Photoreceptor terminal of a mutant type (mutant). right: Close-up of the left panel, showing labels set by a human (+) and labels found by our method (∗). 
Threshold θ was 0.3, which yields f_c ≈ 1 − f_fp in this case.

against each other in the same figure (single crosses). They indicate what performance is achievable at best. A curve passing through these points can be considered to do the task on average as well as a human does. One can see that for the wild type image the curve gets close to that region. For the mutant the performance is slightly worse in terms of the area. In mutants not only the number of vesicles, but typically also their shape and appearance differ. This variability was not covered by the training set and had to be generalized from the wild type data.

5 Discussion

We showed that SVR, used as a nonlinear filter, was able to detect synaptic vesicles in electron micrographs with high accuracy. On the one hand, for good performance the ability of the SVR to learn the input/output mapping properly is crucial. On the other hand, it is necessary that a small neighborhood in the input image contains sufficient information about the target. Due to the 'curse of dimensionality' (cf. [5]), the receptive field P must not be too large, unless there is a huge amount of training data. A smaller input dimension would make the learning easier, but if P is too small, the information that x(r) contains about y(r) may be too small and the performance poor. For the presented application a patch size of P = 50 × 50 was a good tradeoff. Note that, since we do the filtering in the frequency domain, the size of P has, in contrast to the number of SVs, no direct influence on the computation time needed for filtering. Thus we have a 2500-dimensional input space and only 286 points in this space that describe a vesicle. Clearly, only a model with low complexity would achieve acceptable generalization, and this is what we used. In fact the best linear SVR, i.e. 
the best linear filter, which has an even much lower complexity, still yields a performance of A_ter04 = 0.82 and A_mutant = 0.74 (cf. Fig. 3, nonlinear: A_ter04 = 0.85...0.89, A_mutant = 0.76...0.83). However, for future work we plan to extend the training set significantly. To do so, we have access to hand labelings for a broad variety of images of different mutants, also including slightly different scalings. With such additional training data the nonlinear SVR can become more complex without loss of generalization performance. The capacity of the linear filter, however, cannot grow any further. Thus we expect the performance gap between nonlinear and linear filtering to grow significantly.

Figure 3: ROC of the validation with ter04 (left; A_1 = 0.863, A_2 = 0.848, A_3 = 0.838, A_4 = 0.889) and with mutant (right; A_1 = 0.826, A_2 = 0.765). For various thresholds θ, f_c is plotted on the x-axis versus 1 − f_fp on the y-axis. The single crosses show the fraction of matching labels for every pair of hand labels of ter04. For a detailed explanation, see text.

Acknowledgments

Support contributed by: BMBF grant 0311559 (R.V., M.S., K.O.); NIH grant EY-03592 and the Killam Trust (I.A.M.).

References

[1] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels. The MIT Press, 2002.

[2] R.S. Stowers and T.L. Schwarz. A genetic method for generating Drosophila eyes composed exclusively of mitotic clones of a single genotype. Genetics, (152):1631-1639, 1999.

[3] R. Fabian-Fine, P. Verstreken, P.R. Hiesinger, J.A. Horne, R. Kostyleva, H.J. Bellen, and I.A. Meinertzhagen. Endophilin acts after synaptic vesicle fission in Drosophila photoreceptor terminals. J. Neurosci., 2003. 
(in press).

[4] Simon S. Haykin. Neural Networks: A Comprehensive Foundation. Prentice Hall, 1998.

[5] Vladimir Vapnik. The Nature of Statistical Learning Theory. Springer, 1995.

[6] B. Schölkopf, A. Smola, R. Williamson, and P. Bartlett. New support vector algorithms. Neural Computation, 12(5):1207-1245, May 2000.

[7] J. Mercer. Functions of positive and negative type and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London A, 209:415-446, 1909.

[8] William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. Numerical Recipes in C. Cambridge University Press, 2nd edition, 1992.

[9] Chih-Chung Chang and Chih-Jen Lin. LIBSVM - A Library for Support Vector Machines. http://www.csie.ntu.edu.tw/~cjlin/libsvm/, April 2003.

[10] L. O. Harvey, Jr. The critical operating characteristic and the evaluation of expert judgment. Organizational Behavior and Human Decision Processes, 53(2):229-251, 1992.
", "award": [], "sourceid": 2502, "authors": [{"given_name": "Roland", "family_name": "Vollgraf", "institution": null}, {"given_name": "Michael", "family_name": "Scholz", "institution": null}, {"given_name": "Ian", "family_name": "Meinertzhagen", "institution": null}, {"given_name": "Klaus", "family_name": "Obermayer", "institution": null}]}