{"title": "Discriminating Deformable Shape Classes", "book": "Advances in Neural Information Processing Systems", "page_first": 1491, "page_last": 1498, "abstract": "", "full_text": "Discriminating deformable shape classes\n\nS. Ruiz-Correay, L. G. Shapiroy, M. Meil\u02d8az and G. Berson$\n\nyDepartment of Electrical Engineering\n\nzDepartment of Statistics\n\n$Division of Medical Genetics, School of Medicine\n\nUniversity of Washington, Seattle, WA 98105\n\nAbstract\n\nWe present and empirically test a novel approach for categorizing 3-D free form ob-\nject shapes represented by range data . In contrast to traditional surface-signature based\nsystems that use alignment to match speci\ufb01c objects, we adapted the newly introduced\nsymbolic-signature representation to classify deformable shapes [10]. Our approach con-\nstructs an abstract description of shape classes using an ensemble of classi\ufb01ers that learn\nobject class parts and their corresponding geometrical relationships from a set of numeric\nand symbolic descriptors. We used our classi\ufb01cation engine in a series of large scale dis-\ncrimination experiments on two well-de\ufb01ned classes that share many common distinctive\nfeatures. The experimental results suggest that our method outperforms traditional numeric\nsignature-based methodologies. 1\n\n1\n\nIntroduction\n\nCategorizing objects from their shape is an unsolved problem in computer vision that en-\ntails the ability of a computer system to represent and generalize shape information on the\nbasis of a \ufb01nite amount of prior data. For automatic categorization to be of practical value,\na number of important issues must be addressed. As pointed out in [10], how to construct\na quantitative description of shape that accounts for the complexities in the categorization\nprocess is currently unknown. 
From a practical perspective, human perception, knowledge, and judgment are used to elaborate qualitative definitions of a class and to make distinctions among different classes. Nevertheless, categorization in humans is a standing problem in neuroscience and psychology, and no one is certain what information is utilized and what kind of processing takes place when constructing object categories [8]. Consequently, the task of classifying object shapes is often cast in the framework of supervised learning.\n\nMost 3-D object recognition research in computer vision has heavily used the alignment-verification methodology [11] for recognizing and locating specific objects in the context of industrial machine vision. The successful approaches are rather diverse and span many different axes. However, only a handful of studies have addressed the problem of categorizing shape classes containing the significant shape variation and missing information frequently found in real range scenes. Recently, Osada et al. [9] developed a shape representation to match similar objects. The so-called shape distribution encodes the shape information of a complete 3-D object as a probability distribution sampled from a shape function. Discrimination between classes is attempted by comparing a deterministic similarity measure based on an Lp norm. Funkhouser et al. [1] extended the work on shape distributions by developing a representation of shape for object retrieval.\n\n1This research is based upon work supported by NSF Grant No. IIS-0097329 and NIH Grant No. P20LM007714. Any opinions, findings, conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF or NIH.\n\nThe representation is based on a spherical harmonics expansion of the points of a polygonal surface mesh rasterized into a voxel grid.
Query objects are matched to the database using a nearest-neighbor classifier. In [7], Martin et al. developed a physical model for studying neuropathological shape deformations using principal component analysis and a Gaussian quadratic classifier. Golland [2] introduced the discriminative direction for kernel classifiers for quantifying morphological differences between classes of anatomical structures. The method utilizes the distance-transform representation to characterize shape, but it is not directly applicable to range data due to the dependence of the representation on the global structure of the objects. In [10], we developed a shape novelty detector for recognizing classes of 3-D object shapes in cluttered scenes. The detector learns the components of a shape class and their corresponding geometric configuration from a set of surface signatures embedded in a Hilbert space. The numeric signatures encode characteristic surface features of the components, while the symbolic signatures describe their corresponding spatial arrangement.\n\nThe encouraging results obtained with our novelty detector motivated us to take a step further and extend our algorithm to accommodate classification by developing the 3-D shape classifier described in the next section. The basic idea is to generalize existing surface representations that have proved effective in recognizing specific 3-D objects to the problem of object classes by using a “symbolic” representation that is resistant to deformation, as opposed to a numeric representation that is tied to a specific shape. We were also motivated by applications in medical diagnosis and human interface design where 3-D shape information plays a significant role.
Detecting congenital abnormalities from craniofacial features [3], identifying cancerous cells using microscopic tomography, and discriminating 3-D facial gestures are some of the driving applications.\n\nThe paper is organized as follows. Section 2 describes our proposed method. Section 3 is devoted to the experimental results. Section 4 discusses relevant aspects of our work and concludes the paper.\n\n2 Our Approach\n\nWe develop our shape classifier in this section. For the sake of clarity we concentrate on the simplest architecture capable of performing binary classification. Nevertheless, the approach admits a straightforward extension to a multi-class setting. The basic architecture consists of a cascade of two classification modules. Both modules have the same structure (a bank of novelty detectors and a multi-class classifier) but operate on different input spaces. The first module processes numeric surface signatures and the second, symbolic ones. These shape descriptors characterize our classes at two different levels of abstraction.\n\n2.1 Surface signatures\n\nThe surface signatures developed by Johnson and Hebert [5] are used to encode the surface shape of free-form objects. In contrast to shape distributions and harmonic descriptors, their spatial scale can be enlarged to take into account local and non-local effects, which makes them robust against the clutter and occlusion generally present in range data. Experimental evidence has shown that the spin image and some of its variants are the preferred choice for encoding surface shape whenever the normal vectors of the surfaces of the objects can be accurately estimated [11]. The symbolic signatures developed in [10] are used at the next level to describe the spatial configuration of labeled surface regions.\n\nNumeric surface signatures.
A spin image [5] is a two-dimensional histogram computed at an oriented point P of the surface mesh of an object (see Figure 1). The histogram accumulates the coordinates α and β of a set of contributing points Q on the mesh. Contributing points are those that are within a specified distance of P and for which the surface normal forms an angle of less than the specified size with the surface normal N of P. This angle is called the support angle. As shown in Figure 1, the coordinate α is the distance from P to the projection of Q onto the tangent plane TP at point P; β is the distance from Q to this plane. We use spin images as the numeric signatures in this work.\n\nFigure 1: The spin image for point P is constructed by accumulating in a 2-D histogram the coordinates α and β of a set of contributing points (such as Q) on the mesh representing the object.\n\nSymbolic surface signatures. Symbolic surface signatures (Fig. 2) are somewhat related to numeric surface signatures in that they also start with a point P on the surface mesh and consider a set of contributing points Q, which are still defined in terms of the distance from P and the support angle. The main difference is that they are derived from a labeled surface mesh (shown in Figure 2a); each vertex of the mesh has an associated symbolic label referencing the surface region or component in which it lies. The components are constructed using a region growing algorithm described in Section 2.2. For symbolic surface signature construction, the vector PQ in Figure 2b is projected onto the tangent plane at P, where a set of orthogonal axes γ and δ have been defined.
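As a concrete aside, the spin-image accumulation described earlier in this section can be sketched as follows; the mesh points, normals, bin counts, support angle, and distance cutoff are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def spin_image(P, N, points, normals, support_angle_deg=60.0,
               max_dist=1.0, bins=16):
    """Sketch of spin-image accumulation at oriented point (P, N).

    alpha = radial distance from P to the projection of Q onto the
    tangent plane at P; beta = signed distance from Q to that plane.
    All parameter names and values here are illustrative.
    """
    N = N / np.linalg.norm(N)
    hist = np.zeros((bins, bins))
    cos_support = np.cos(np.radians(support_angle_deg))
    for Q, NQ in zip(points, normals):
        d = Q - P
        if np.linalg.norm(d) > max_dist:
            continue                      # outside the spatial support
        if np.dot(NQ / np.linalg.norm(NQ), N) < cos_support:
            continue                      # fails the support-angle test
        beta = np.dot(d, N)               # distance to the tangent plane
        alpha = np.sqrt(max(np.dot(d, d) - beta**2, 0.0))
        i = min(int(alpha / max_dist * bins), bins - 1)
        j = min(int((beta + max_dist) / (2 * max_dist) * bins), bins - 1)
        hist[i, j] += 1
    return hist
```

Enlarging `max_dist` corresponds to widening the signature's spatial scale, which is what lets the descriptor capture non-local shape.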
The direction of the δ–γ axes is arbitrarily defined, since no curvature information was used to specify preferred directions. This ambiguity is resolved by the methods described in Section 2.2. The discretized versions of the γ and δ coordinates of PQ are used to index a 2-D array, and the indexed position of the array is set to the component label of Q. Note that it is possible for multiple points Q that have different labels to project into the same bin. In this case, the label that appears most frequently is assigned to the bin. The resultant array is the symbolic surface signature at point P. Note that the signature captures the relationships among the labeled regions on the mesh. The signature is shown as a labeled color image in Figure 2c.\n\nFigure 2: The symbolic surface signature for point P on a labeled surface mesh model of a human head. The signature is represented as a labeled color image for illustration purposes.\n\n2.2 Classifying shape classes\n\nWe consider the classification task for which we are given a set of l surface meshes C = {C1, ..., Cl} representing two classes of object shapes. Each surface mesh is labeled by y ∈ {±1}. The problem is to use the given meshes and the labels to construct an algorithm that predicts the label y of a new surface mesh C. We let C+1 (C−1) denote the shape class labeled with y = +1 (y = −1, respectively). We start by assuming that the correspondences between all the points of the instances for each class Cy are known. This can be achieved by using a morphable surface models technique such as the one described in [10].\n\nFinding shape class components\n\nBefore shape class learning can take place, the salient feature components associated with C+1 and C−1 must be specified. Each component of a class is identified by a particular region located on the surface of the class members.
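The symbolic-signature indexing described above (project PQ onto the tangent plane, discretize the γ and δ coordinates, and keep the majority label per bin) can be sketched as follows; the axis construction and all parameter values are illustrative assumptions.

```python
import numpy as np

def symbolic_signature(P, N, points, labels, max_dist=1.0, bins=16):
    """Sketch of symbolic-signature construction at point P (normal N).

    Each contributing point Q is projected onto the tangent plane at P;
    its (gamma, delta) coordinates index a 2-D array set to the
    component label of Q, with the majority label winning when bins
    collide.  The axis choice is arbitrary, as in the text.
    """
    N = N / np.linalg.norm(N)
    # arbitrary orthogonal axes (gamma, delta) spanning the tangent plane
    tmp = np.array([1.0, 0.0, 0.0]) if abs(N[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    gamma = np.cross(N, tmp); gamma /= np.linalg.norm(gamma)
    delta = np.cross(N, gamma)
    votes = {}                            # (i, j) bin -> {label: count}
    for Q, lab in zip(points, labels):
        d = Q - P
        if np.linalg.norm(d) > max_dist:
            continue
        g, t = np.dot(d, gamma), np.dot(d, delta)
        i = min(int((g + max_dist) / (2 * max_dist) * bins), bins - 1)
        j = min(int((t + max_dist) / (2 * max_dist) * bins), bins - 1)
        votes.setdefault((i, j), {}).setdefault(lab, 0)
        votes[(i, j)][lab] += 1
    sig = np.zeros((bins, bins), dtype=int)
    for (i, j), counts in votes.items():
        sig[i, j] = max(counts, key=counts.get)   # most frequent label wins
    return sig
```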
For each class C+1 and C−1 the components are constructed one at a time using a region growing algorithm. This algorithm iteratively constructs a classification function (novelty detector) which captures regions in the space of numeric signatures S that approximately correspond to the support of an assumed probability distribution function FS associated with the class component under consideration. In this context, a shape class component is defined as the set of all mesh points of the surface meshes in a shape class whose numeric signatures lie inside the support region estimated by the classification function. The region growing algorithm proceeds as follows.\n\nFigure 3: The component R was grown around the critical point p using the algorithm described in the text. Six typical models of the training set are shown. The numeric signatures for the critical point p of five of the models are also shown. Their image width is 70 pixels, and their region of influence covers about three quarters of the surface mesh models.\n\nStep I (Region Growing). The input of this phase is a set of surface meshes that are samples of an object class Cy.\n\n1. Select a set of critical points on a training object for class Cy. Let my be the number of critical points per object. The number my and the locations of the critical points are chosen by hand at this time. Note that the critical points chosen for class C+1 can differ from the critical points chosen for class C−1.\n\n2. Use the known correspondences to find the corresponding critical points on all training instances in C belonging to Cy.\n\n3. For each critical point p of a class Cy, compute the numeric signatures at the corresponding points of every training instance of Cy; this set of signatures is the training set Tp,y for critical point p of class Cy.\n\n4. For each critical point p of class Cy, train a component detector (implemented as a ν-SVM novelty detector [12]) to learn a component about p, using the training set Tp,y. The component detector will actually grow a region about p using the shape information of the numeric signatures in the training sample. The regions are grown for each critical point individually using the following growing phase. Let p be one of the m critical points. The performance of the component detector for point p can be quantified by calculating a bound on the expected probability of error E on the target set as E = #SVp/|Cy|, where #SVp is the number of support vectors in the component detector for p, and |Cy| is the number of elements with label y in C. Using the classifier for point p, perform an iterative component growing operation to expand the component about p. Initially, the component consists only of point p. An iteration of the procedure consists of the following steps. 1) Select a point that is an immediate neighbor of one of the points in the component and is not yet in the component. 2) Retrain the classifier with the current component plus the new point. 3) Compute the error E′ for this classifier. 4) If the new error E′ is lower than the previous error E, add the new point to the component and set E = E′. 5) This continues until no more neighbors can be added to the component. This region growing approach is related to the one used by Heisele et al. [4] for categorizing objects in 2-D images. Figure 3 shows an example of a component grown by this technique about critical point p on a training set of 200 human faces from the University of South Florida database.\n\nAt the end of Step I, there are my component detectors, each of which can identify the component of a particular critical point of the object shape class Cy.
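The growing phase of Step I can be sketched as follows, with scikit-learn's OneClassSVM standing in for the ν-SVM novelty detector; the data layout (per-point signature matrices and a mesh adjacency map) and all parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM  # stands in for the nu-SVM novelty detector

def grow_component(seed, neighbors, signatures, nu=0.1):
    """Sketch of the iterative region growing in Step I.

    `signatures[p]` holds the numeric signatures of point p across all
    training instances (one row per instance); `neighbors[p]` lists
    mesh-adjacent points.  Names and parameters are illustrative.
    """
    def error_bound(component):
        X = np.vstack([signatures[p] for p in component])
        det = OneClassSVM(kernel="rbf", nu=nu).fit(X)
        # bound on expected error: #support vectors / #training samples
        return len(det.support_) / len(X)

    component = {seed}
    E = error_bound(component)
    improved = True
    while improved:
        improved = False
        frontier = {q for p in component for q in neighbors[p]} - component
        for q in frontier:
            E_new = error_bound(component | {q})
            if E_new < E:          # keep the neighbor only if the bound drops
                component.add(q)
                E = E_new
                improved = True
    return component, E
```

The retrain-and-compare loop mirrors steps 1)-5) above: a neighbor joins the component only when it lowers the support-vector error bound.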
That is, when applied to a surface mesh, each component detector will determine which vertices it thinks belong to its learned component (positive surface points), and which vertices do not.\n\nStep II. The input of this step is the training set of numeric signatures and their corresponding labels for each of the m = m+1 + m−1 components. The labels are determined by the Step-I component detectors previously applied to C+1 and C−1. The output is a component classifier (a multi-class ν-SVM) that, when given a positive surface point of a surface mesh previously processed with the bank of component detectors, will determine the particular component of the m components to which this point belongs.\n\nLearning spatial relationships\n\nThe ensemble of component detectors and the component classifier described above define the classification module mentioned at the beginning of the section. A central feature of this module is that it can be used for learning the spatial configuration of the labeled components just by providing as input the set C of training surface meshes with each vertex labeled with the label of its component, or zero if it does not belong to a component. The algorithm proceeds in the same fashion as described above, except that the classifiers operate on the symbolic surface signatures of the labeled mesh. The signatures are embedded in a Hilbert space by means of a Mercer kernel that is constructed as follows. Let A and B be two square matrices of dimension N storing arbitrary labels. Let A ∗ B denote a binary square matrix whose elements are defined as [A ∗ B]ij = match([A]ij, [B]ij), where match(a, b) = 1 if a = b, and 0 otherwise.
The symmetric mapping ⟨A, B⟩ = (1/N²) Σij [A ∗ B]ij, whose range is the interval [0, 1], can be interpreted as the cosine of the angle θAB between two unit vectors on the unit sphere lying within a single quadrant. The angle θAB is the geodesic distance between them. Our kernel function is defined as k(A, B) = exp(−θAB²/σ²).\n\nSince symbolic surface signatures are defined up to a rotation, we use the virtual SV method for training all the classifiers involved. The method consists of training a component detector on the signatures to calculate the support vectors. Once the support vectors are obtained, new virtual support vectors are extracted from the labeled surface mesh in order to include the desired invariance; that is, a number r of rotated versions of each support vector is generated by rotating the δ–γ coordinate system used to construct each symbolic signature (see Fig. 2). Finally, the novelty detector used by the algorithm is trained with the enlarged data set consisting of the original training data and the set of virtual support vectors.\n\nThe worst-case complexity of the classification module is O(nc²s), where n is the number of vertices of the input mesh, s is the size of the input signatures (either numeric or symbolic) and c is the number of novelty detectors. In the classification experiments to be described below, typical values for n, s and c are 10,000, 2,500 and 8, respectively.\n\nA classification example\n\nAn architecture capable of discriminating two shape classes consists of a cascade of two classification modules. The first module identifies the components of each shape class, while the second verifies the geometric consistency (spatial relationships) of the components. Figure 4 illustrates the classification procedure on two sample surface meshes from a test set of 200 human heads.
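The label-match kernel just defined can be sketched in a few lines; the equal-shape assumption and the value of σ are illustrative.

```python
import numpy as np

def symbolic_kernel(A, B, sigma=1.0):
    """Label-match kernel for symbolic signatures (sketch).

    <A,B> = (1/N^2) * sum_ij match(A_ij, B_ij) lies in [0, 1] and is
    read as the cosine of an angle theta_AB; the kernel is then
    k(A, B) = exp(-theta_AB^2 / sigma^2).  sigma is illustrative.
    """
    assert A.shape == B.shape
    inner = (A == B).mean()           # fraction of matching labels
    theta = np.arccos(np.clip(inner, 0.0, 1.0))
    return np.exp(-theta**2 / sigma**2)
```

Identical signatures give θ = 0 and k = 1; signatures with no matching labels give θ = π/2, the largest geodesic distance within the quadrant.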
The first mesh (Figure 4a) belongs to the class of healthy individuals, while the second (Figure 4e) belongs to the class of individuals with a congenital syndrome that produces a pathological craniofacial deformation. The input classification module was trained with a set of 400 surface meshes and 4 critical points per class to recognize the eight components shown in Figures 4b and f. The first four components are associated with healthy heads and the rest with the malformed ones. Each of the test surface meshes was individually processed as follows. Given an input surface mesh to the first classification module, the classifier ensemble (component detectors and component classifier) is applied to the numeric surface signatures of its points (Figures 4a and e). A connected components algorithm is then applied to the result, and components of size below a threshold (10 mesh points) are discarded. After this process the resulting labeled mesh is fed to the second classification module, which was trained with 400 labeled meshes and two critical points to recognize two new components. The first component was grown around the point P in Figure 4a. The second component was grown around point Q in Figure 4e. The symbolic signatures inside the region around P encode the geometric configuration of three of the four components learned by the first module (healthy heads), while the symbolic signatures around Q encode the geometric configuration of three of the remaining four components (malformed heads), Figures 4b and f. Consequently, the points of the output mesh of the second module will be set to “+1” if they belong to learned symbolic signatures associated with the healthy heads (Figure 4c), and “−1” otherwise (Figure 4g). Finally, the filtering algorithms described above are applied to the output mesh.
Figures 4c and g show the regions found by our algorithm that correspond to the shape class models of the normal and abnormal head, respectively.\n\nFigure 4: Binary classification example. a) and e) Mesh models of normal and abnormal heads, respectively. b) and f) Output of the first classification module. Components 1-4 are associated with healthy individuals, while components 5-8 are associated with unhealthy ones. Labeled points outside the bounded regions correspond to false positives. c) and g) Output of the second classification module. d) and h) Normalized classifier margin of the components associated with the second classification module. Red points represent high confidence values, while blue points represent low values.\n\n3 Experiments\n\nWe used our classifier in a series of discrimination tasks with deformable 3-D human heads and faces. All data sets were split into training and testing samples. For classification with human heads, the data consisted of 600 surface mesh models (400 training samples and 200 testing samples). The models had a resolution of 1 mm (~30,000 points). For the faces, the data sets consisted of 300 surface meshes (200 training samples and 100 testing samples). The corresponding mesh resolution was set to about 0.8 mm (~70,000 points). All the surface models considered here were obtained from range data scanners, and all the deformable models were constructed using the methods described in [10].\n\nWe tested the stability in the formation of shape class components using the faces data set. This set contains a significant amount of shape variability. It includes models of real subjects of different gender, race, age (young and mature adults) and facial gesture (smiling vs. neutral). Typical samples are shown in Figure 3.
The first module of our classifier must generate stable components to allow the second module to discriminate their corresponding geometric configurations. We trained the first classification module with a set of 200 faces using critical points arbitrarily located on the cheek, chin, forehead and philtrum of the surface models. The trained module was then applied to the testing faces to identify the corresponding components. The component associated with the forehead was correctly identified in 86% of the testing samples. This rate is reasonably high considering the amount of shape variability in the data set (Fig. 3). The percentages of identified components associated with the cheek, chin and philtrum were 86%, 89% and 82%, respectively.\n\nWe performed classification of normal versus abnormal human heads, a task that often occurs in medical settings. The abnormalities considered are related to two genetic syndromes that can produce severe craniofacial deformities.2 Our goal was to evaluate the performance of our classifier in discriminating examples of two well-defined classes between which a very fine distinction exists. In our setup, the classes share many common features. This makes the classification difficult even for a trained physician. In Task I, the classifier attempted to discriminate between test samples that were 100% normal or 100% affected by each of the two model syndromes (Tasks I-A and I-B). Task II was similar, except that the classifier was presented with examples with varying degrees of abnormality. The surface meshes of each of these examples were convex combinations of normal and abnormal heads. The degree of overlap between the resulting classes made the discrimination process more difficult. Our rationale was to drive a realistic task to its limit in order to evaluate the discrimination capabilities of the classifier.
High discrimination power could be useful to quantitatively evaluate cases that are otherwise difficult to diagnose, even by human standards. The results of the experiments are summarized in Table 1. Our shape classifier was able to discriminate with high accuracy between normal and abnormal models. It was also able to discriminate classes that share a significant amount of common shape features (see II-B* in Table 1).\n\nWe compared the performance of our approach with a signature-based method [11] that uses alignment for matching objects and is robust to scene clutter and occlusion. As we expected, a pilot study showed that the signature-based method performs poorly in Tasks I-A and I-B, with an average classification rate close to 43%. The methods cited in the introduction were not considered for direct comparison, because they use global shape representations that were designed for classifying complete 3-D models. Our approach using symbolic signatures can operate on single-view data sets containing partial model information, as shown by the experimental results performed on several shape classes [10].\n\nI-A (100% normal - 0% abnormal): 98     II-B (50% normal - 50% abnormal): 97\nI-B (100% normal - 0% abnormal): 100    II-B* (25% normal - 75% abnormal): 92\nII-B (65% normal - 35% abnormal): 98    II-B (15% normal - 85% abnormal): 48\n\nTable 1: Classification accuracy rate (%) for discrimination between the above test samples versus 100% abnormal test samples.\n\n4 Discussion and Conclusion\n\nWe presented a supervised approach to classification of 3-D shapes represented by range data that learns class components and their geometrical relationships from surface descriptors. We performed preliminary classification experiments on models of human heads (normal vs.
abnormal) and studied the stability in the formation of class components using a collection of real face models containing a large amount of shape variability. We obtained promising results: the classification rates were high, and the algorithm was able to grow consistent class components despite the variance.\n\nWe want to stress which parts of our approach are essential as described and which are modifiable. The numeric and symbolic shape descriptors considered here are important. They are locally defined, but they convey a certain amount of global information. For example, the spin image defined on the forehead (point P) in Figure 3 encodes information about the shape of most of the face (including the chin). As the image width increases, the spin image becomes more descriptive. Spin images and some variants [11] are reliable for encoding surface shape in the present context. Other descriptors, such as curvature-based or harmonic signatures, are not descriptive enough or lack robustness to scene clutter and occlusion. In the classification experiments described above, we did not perform any kind of feature selection for choosing the critical points. Nevertheless, the shape descriptors captured enough global information to allow a classifier to discriminate between the distinctive features of normal and abnormal heads.\n\n2Test samples were obtained from models with craniofacial features based upon either the Greig cephalopolysyndactyly (A) or the trisomy 9 mosaic (B) syndromes [6].\n\nThe structure of the classification module (bank of novelty detectors and multi-class classifier) is important. The experimental results showed us that the output of the novelty detectors is not always reliable, and the multi-class classifier becomes critical for constructing stable and consistent class components.
In the context of our medical application, the performance of our novelty detectors can be improved by incorporating prior information into the classification scheme. Maximum entropy classifiers or an extension of Bayes point machines to the one-class setting are being investigated as possible alternatives. The region-growing algorithm for finding class components is not critical. The essential point consists of generating groups of neighboring surface points whose shape descriptors are similar, but distinctive enough from the signatures of other components.\n\nThere are several issues to investigate. 1) Our method is able to model shape classes containing significant shape variance and can absorb about 20% of scale changes. A multi-resolution approach could be used for applications that require full scale invariance. 2) We used large range data sets for training our classifier. However, larger sets are required in order to capture the shape variability of the abnormal craniofacial features due to race, age and gender. We are currently collecting data from various medical sources to create a database for implementing and testing a semi-automated diagnosis system. The data includes 3-D models constructed from range data and CT scans. The usability of the system will be evaluated by a panel of expert geneticists.\n\nReferences\n\n[1] T. Funkhouser, P. Min, M. Kazhdan, J. Chen, A. Halderman, D. Dobkin, and D. Jacobs, “A Search Engine for 3D Models,” ACM Transactions on Graphics, 22(1), pp. 83-105, January 2003.\n\n[2] P. Golland, “Discriminative Direction for Kernel Classifiers,” In: Advances in Neural Information Processing Systems, 13, Vancouver, Canada, 745-752, 2001.\n\n[3] P. Hammond, T. J. Hunton, M. A. Patton, and J. E. Allanson,
“Delineation and Visualization of Congenital Abnormality using 3-D Facial Images,” In: Intelligent Data Analysis in Medicine and Pharmacology, MEDINFO, 2001, London.\n\n[4] B. Heisele, T. Serre, M. Pontil, T. Vetter and T. Poggio, “Categorization by Learning and Combining Object Parts,” In: Advances in Neural Information Processing Systems, 14, Vancouver, Canada, Vol. 2, 1239-1245, 2002.\n\n[5] A. E. Johnson and M. Hebert, “Using Spin Images for Efficient Object Recognition in Cluttered 3D Scenes,” IEEE Trans. Pattern Analysis and Machine Intelligence, 21(5), pp. 433-449, 1999.\n\n[6] K. L. Jones, Smith's Recognizable Patterns of Human Malformation, 5th Ed., W.B. Saunders Company, 1999.\n\n[7] J. Martin, A. Pentland, S. Sclaroff, and R. Kikinis, “Characterization of Neuropathological Shape Deformations,” IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 2, No. 2, 1998.\n\n[8] D. L. Medin, C. M. Aguilar, Categorization. In R. A. Wilson and F. C. Keil (Eds.), The MIT Encyclopedia of the Cognitive Sciences, Cambridge, MA, 1999.\n\n[9] R. Osada, T. Funkhouser, B. Chazelle, and D. Dobkin, “Matching 3-D Models with Shape Distributions,” Shape Modeling International, 2001, pp. 154-166.\n\n[10] S. Ruiz-Correa, L. G. Shapiro, and M. Meilă, “A New Paradigm for Recognizing 3-D Object Shapes from Range Data,” Proceedings of the IEEE Computer Society International Conference on Computer Vision 2003, Vol. 2, pp. 1126-1133.\n\n[11] S. Ruiz-Correa, L. G. Shapiro, and M. Meilă, “A New Signature-based Method for Efficient 3-D Object Recognition,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2001, Vol. 1, pp. 769-776.\n\n[12] B. Schölkopf and A. J.
Smola, Learning with Kernels, The MIT Press, Cambridge, MA, 2002.", "award": [], "sourceid": 2355, "authors": [{"given_name": "Salvador", "family_name": "Ruiz-Correa", "institution": null}, {"given_name": "Linda", "family_name": "Shapiro", "institution": null}, {"given_name": "Marina", "family_name": "Meila", "institution": null}, {"given_name": "Gabriel", "family_name": "Berson", "institution": null}]}