{"title": "Geometric Analysis of Constrained Curves", "book": "Advances in Neural Information Processing Systems", "page_first": 1579, "page_last": 1586, "abstract": "", "full_text": "A Computational Geometric Approach to Shape\n\nAnalysis in Images\n\nAnuj Srivastava\n\nDepartment of Statistics\nFlorida State University\nTallahassee, FL 32306\n\nanuj@stat.fsu.edu\n\nWashington Mio\n\nDepartment of Mathematics\n\nFlorida State University\nTallahassee, FL 32306\nmio@math.fsu.edu\n\nXiuwen Liu\n\nDepartment of Computer Science\n\nFlorida State University\nTallahassee, FL 32306\nliux@cs.fsu.edu\n\nEric Klassen\n\nDepartment of Mathematics\n\nFlorida State University\nTallahassee, FL 32306\n\nklassen@math.fsu.edu\n\nAbstract\n\nWe present a geometric approach to statistical shape analysis of closed\ncurves in images. The basic idea is to specify a space of closed curves\nsatisfying given constraints, and exploit the differential geometry of this\nspace to solve optimization and inference problems. We demonstrate this\napproach by: (i) de\ufb01ning and computing statistics of observed shapes, (ii)\nde\ufb01ning and learning a parametric probability model on shape space, and\n(iii) designing a binary hypothesis test on this space.\n\n1\n\nIntroduction\n\nAn important goal in image understanding is to detect, track and label objects of interest\npresent in observed images. Imaged objects can be characterized in many ways: according\nto their colors, textures, shapes, movements, and locations. The past decade has seen sig-\nni\ufb01cant advances in the modeling and analysis of pixel values or textures to characterize\nobjects in images, albeit with limited success. On the other hand, planar curves that rep-\nresent contours of objects have been studied independently for a long time. An emerging\nopinion in the vision community is that global features such as shapes of contours should\nalso be taken into account for the successful detection and recognition of objects. 
A common approach to analyzing curves in images is to treat them as level sets of functions; algorithms involving such active contours are usually governed by partial differential equations (PDEs) driven by appropriate data terms and smoothness penalties (see, for example, [10]). Regularized curve evolutions and region-based active contours offer alternatives in similar frameworks. This remarkable body of work contains various studies of curve evolution, each with relative strengths and drawbacks.

In this paper, we present a framework for the algorithmic study of curves, their variations and statistics. In this approach, a fundamental element is a space of closed curves, with additional constraints to impose equivalence of shapes under rotation, translation, and scale. We exploit the geometry of these spaces using elements such as tangents, normals, geodesics and gradient flows to solve optimization and statistical inference problems for a variety of cost functions and probability densities. This framework differs from those employed in previous work on "geometry-driven flows" [8] in that here both the geometry of the curves and the geometry of spaces of curves are utilized. Here the dynamics of active contours is described by vector fields on spaces of curves. It is important to emphasize that a shape space is usually a nonlinear, infinite-dimensional manifold, and its elements are the individual curves of interest. Several interesting applications can be addressed in this formulation, including: 1) Efficient deformations between any two curves are generated by geodesic paths connecting the elements they represent in the shape space. Geodesic lengths also provide a natural metric for shape comparisons. 2) Given a set of curves (or shapes), one can define the concepts of mean and covariance using geodesic paths, and thus develop statistical frameworks for studying shapes.
Furthermore, one can define probabilities on a shape space to perform curve (or shape) classification via hypothesis testing. While these problems have been studied in the past, with elegant solutions presented in the literature (examples include [9, 11, 7, 2, 5]), we demonstrate the strength of the proposed framework by addressing them using significantly different ideas.

Given past achievements in PDE-based approaches to curve evolution, what is the need for newer frameworks? The study of the structure of the shape space provides new insights and solutions to problems involving dynamic contours and to problems in quantitative shape analysis. Once the constraints are imposed in the definitions of shape spaces, the resulting solutions automatically satisfy these constraints. The approach also complements existing methods of image processing and analysis well by realizing new computational efficiencies. Its main strength is its exploitation of the differential geometry of the shape space. For instance, a geodesic or gradient flow X_t of an energy function E can be generated as a solution of an ordinary differential equation of the type

    dX_t/dt = Π(∇E(X_t)),    (1)

where Π denotes an appropriate projection onto a tangent space. This contrasts with the nonlinear PDE-based curve evolutions of past works. The geometry of shape space also enables us to derive statistical elements: probability measures, means and covariances; these quantities have rarely been treated in previous studies. In shape extraction, the main focus in past works has been on solving PDEs driven by image features under smoothness constraints, not on the statistical analysis of shapes of curves. The use of geodesic paths, or piecewise geodesic paths, has also seen limited use in the past.

We should also point out the main limitations of the proposed framework.
One drawback is that curve evolutions cannot handle certain changes in topology, which is one of the key features of level-set methods; a shape space is purposely set up to not allow curves to branch into several components. Secondly, this idea does not extend easily to the analysis of surfaces in R^3. Despite these limitations, the proposed methodology provides powerful algorithms for the analysis of planar curves, as demonstrated by the examples presented later. Moreover, even in applications where branching appears to be essential, the proposed methods may be applicable with additional developments.

This paper is laid out as follows: Section 2 studies geometric representations of constrained curves as elements of a shape space. Geometric analysis tools on the shape space are presented in Section 3. Section 4 provides examples of statistical analysis on the shape space, while Section 5 concludes the paper with a brief summary.

2 Representations of Shapes

In this paper we restrict the discussion to curves in R^2, although curves in R^3 can be handled similarly. Let α: R → R^2 denote the coordinate function of a curve parametrized by arc length, i.e., satisfying ‖α̇(s)‖ = 1 for every s. A direction function θ(s) is a function satisfying α̇(s) = e^{jθ(s)}, where j = √(−1). θ captures the angle made by the velocity vector with the x-axis, and is defined up to the addition of integer multiples of 2π. The curvature function κ(s) = θ̇(s) can also be used to represent a curve.

Consider the problem of studying shapes of contours or silhouettes of imaged objects as closed, planar curves in R^2, parametrized by arc length. Since shapes are invariant to rigid motions (rotations and translations) and uniform scaling, a shape representation should be insensitive to these transformations.
Scaling can be resolved by fixing the length of α to be 2π, and translations by representing curves via their direction functions. Thus, we consider the space L^2 of all square-integrable functions θ: [0, 2π] → R, with the usual inner product ⟨f, g⟩ = ∫_0^{2π} f(s) g(s) ds. To account for rotations and for the ambiguity in the choice of θ, we restrict direction functions to those having a fixed average, say π. For α to be closed, it must satisfy the closure condition ∫_0^{2π} e^{jθ(s)} ds = 0. Thus, we represent curves by direction functions satisfying the average-π and closure conditions; we call this space of direction functions D. Summarizing, D is the subspace of L^2 consisting of all (direction) functions satisfying the constraints

    (1/2π) ∫_0^{2π} θ(s) ds = π,    ∫_0^{2π} cos(θ(s)) ds = 0,    ∫_0^{2π} sin(θ(s)) ds = 0.    (2)

It is still possible to have multiple continuous functions in D representing the same shape. This variability is due to the choice of the reference point (s = 0) along the curve. For x ∈ S^1 and θ ∈ D, define (x · θ) as the curve whose initial point (s = 0) is moved by a distance x along the curve. We term this a re-parametrization of the curve. To remove the variability due to this re-parametrization group, define the quotient space C ≡ D/S^1 as the space of continuous, planar shapes. For details, please refer to [4].

3 Geometric Tools for Shape Analysis

The main idea in the proposed framework is to use the geometric structure of a shape space to solve optimization and statistical inference problems on these spaces. This approach often leads to simple formulations of these problems and to more efficient vision algorithms. Thus, we must study issues related to the differential geometry and topology of a shape space.
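The three constraints of Eqn. (2) are easy to check numerically once a direction function is discretized. A minimal sketch (the helper name is illustrative, not from the paper) verifies them for the unit circle, whose direction function θ(s) = s satisfies all three:

```python
import numpy as np

# Hypothetical helper: residuals of the three constraints of Eqn. (2) for a
# direction function sampled on a uniform grid over [0, 2*pi).
def constraint_residuals(theta):
    avg = theta.mean() - np.pi                  # (1/2pi) \int theta ds - pi
    ccos = np.cos(theta).mean() * 2 * np.pi     # \int cos(theta(s)) ds
    csin = np.sin(theta).mean() * 2 * np.pi     # \int sin(theta(s)) ds
    return avg, ccos, csin

s = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
theta_circle = s.copy()   # unit circle: theta(s) = s has average pi and is closed
residuals = constraint_residuals(theta_circle)
```

Direction functions violating any residual lie outside D and would first have to be projected onto it.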
In this paper we restrict attention to the tangent and normal bundles, exponential maps, and their inverses on these spaces.

3.1 Tangents and Normals to Shape Space

The main reason for studying the tangential and normal structures is the following: we wish to employ iterative numerical methods in the simulation of geodesic and gradient flows on the shape space. At each step in the iteration, we first flow in the linear space L^2 using standard methods, and then project the new point back onto the shape space using our knowledge of the normal structure.

For technical reasons, it is convenient to reduce optimization and inference problems on C to problems on the manifold D, so we study the latter. It is difficult to specify the tangent spaces to D directly, because they are infinite-dimensional. When working with finitely many constraints, as is the case here, it is easier to describe the space of normals to D in L^2 instead. It can be shown that a vector f ∈ L^2 is tangent to D at θ if and only if f is orthogonal to the subspace spanned by {1, sin θ, cos θ}. Hence, these three functions span the normal space to D at θ. Implicitly, the tangent space is given as T_θ(D) = {f ∈ L^2 | f ⊥ span{1, cos θ, sin θ}}. Thus, the projection Π in Eqn. 1 can be specified by subtracting from a function (in L^2) its projection onto the space spanned by these three elements.

3.2 Exponential Maps

We first describe the computation of geodesics (or one-parameter flows) in D with prescribed initial conditions. Geodesics on D are realized as exponential maps from tangent spaces to D. The intricate geometry of D disallows explicit analytic expressions.
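The tangent-space projection described above can be sketched directly. Since {1, cos θ, sin θ} is not an orthogonal set for a general θ, one assumption in this sketch is that the subtraction is done by solving the small 3×3 Gram system rather than removing raw inner products:

```python
import numpy as np

# Sketch of the projection Pi: remove from f its component in
# span{1, cos(theta), sin(theta)}, the normal space to D at theta.
def project_to_tangent(f, theta, ds):
    B = np.stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
    G = (B @ B.T) * ds          # Gram matrix of the normal-space basis in L2
    b = (B @ f) * ds            # inner products <f, b_i>
    return f - np.linalg.solve(G, b) @ B

s = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
ds = s[1] - s[0]
theta = s.copy()                # direction function of the unit circle
f = s ** 2                      # an arbitrary element of L2
f_tan = project_to_tangent(f, theta, ds)
```

After projection, f_tan is L2-orthogonal to all three normal directions, i.e., it lies in (a discretization of) T_θ(D).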
Therefore, we adopt an iterative strategy: in each step, we first flow infinitesimally in the prescribed tangent direction in the space L^2, and then project the end point of the path to D. Next, we parallel transport the velocity vector to the new point by projecting the previous velocity orthogonally onto the tangent space of D at the new point. Again, this is done by subtracting normal components. The simplest implementation is to use Euler's method in L^2, i.e., to move in each step along short straight line segments in L^2 in the prescribed direction, and then project the path back onto D. Details of this numerical construction of geodesics are provided in [4].

A geodesic can be specified by an initial condition θ ∈ D and a direction f ∈ T_θ(D), the space of all tangent directions at θ. We will denote the corresponding geodesic by Ψ(θ, t, f), where t is the time parameter. The technique just described allows us to compute Ψ numerically. For t = 1, Ψ(θ, 1, f) is the exponential map from f ∈ T_θ(D) to D.

3.3 Shape Logarithms

Next, we focus on the problem of finding a geodesic path between any two given shapes θ_1, θ_2 ∈ D. This is akin to inverting the exponential map. The main issue is to find the appropriate direction f ∈ T_{θ_1}(D) such that a geodesic from θ_1 in that direction passes through θ_2 at time t = 1. In other words, the problem is to solve for an f ∈ T_{θ_1}(D) such that Ψ(θ_1, 0, f) = θ_1 and Ψ(θ_1, 1, f) = θ_2. One can treat the search for this direction as an optimization problem over the tangent space T_{θ_1}(D). The cost to be minimized is given by the functional H[f] = ‖Ψ(θ_1, 1, f) − θ_2‖^2, and we are looking for that f ∈ T_{θ_1}(D) for which: (i) H[f] is zero, and (ii) ‖f‖ is minimal among all such tangents.
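The step–project–transport loop of Section 3.2 has the same structure on any embedded manifold. As an analogy only (not the paper's implementation), the sketch below runs it on the unit sphere S^2, where the point projection is just normalization, so the numerical geodesic can be checked against the known great circle:

```python
import numpy as np

# Analogy sketch of Sec. 3.2 on S^2: Euler step in the ambient space,
# project the point back to the manifold, then project the velocity back
# onto the new tangent space by removing its normal component.
def geodesic_shoot(x, v, T=1.0, n_steps=1000):
    dt = T / n_steps
    for _ in range(n_steps):
        x = x + dt * v                 # flow in the ambient linear space
        x = x / np.linalg.norm(x)      # project point onto the sphere
        v = v - (v @ x) * x            # parallel transport: drop normal part
    return x, v

x0 = np.array([1.0, 0.0, 0.0])
v0 = np.array([0.0, 1.0, 0.0])         # unit tangent at x0
x1, _ = geodesic_shoot(x0, v0, T=np.pi / 2)
```

Flowing a quarter period along this great circle should land near (0, 1, 0); on D the same loop is run with the tangent/normal projection of Section 3.1 in place of the sphere's.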
Since the space T_{θ_1}(D) is infinite-dimensional, this optimization is not straightforward. However, since f ∈ L^2, it has a Fourier decomposition, and we can solve the optimization problem over a finite number of Fourier coefficients. For any two shapes θ_1, θ_2 ∈ D, we have used a shooting method to find the optimal f [4]. The basic idea is to choose an initial direction f specified by its Fourier coefficients and then use a gradient search to minimize H as a function of those coefficients.

Finally, to find the shortest path between two shapes in C, we compute the shortest geodesic connecting representatives of the given shapes in D. This is a simple numerical problem, because C is the quotient of D by the 1-dimensional re-parametrization group S^1. Shown in Figure 1 are three examples of geodesic paths in C connecting given shapes. Drawn in between are shapes corresponding to equally spaced points along the geodesic paths.

4 Statistical Analysis on Shape Spaces

Our goal is to develop tools for statistical analysis of shapes.
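The shooting idea, "pick a direction, shoot a geodesic, and gradient-search until the endpoint hits the target", can be sketched on S^2, where the exponential map has a closed form; on D one would instead integrate the geodesic numerically and parametrize f by its Fourier coefficients. All names here are illustrative:

```python
import numpy as np

def sphere_exp(x, v):
    """Closed-form exponential map on the unit sphere."""
    n = np.linalg.norm(v)
    return x if n < 1e-12 else np.cos(n) * x + np.sin(n) * v / n

def shoot_cost(x1, x2, v):
    return np.sum((sphere_exp(x1, v) - x2) ** 2)   # analogue of H[f]

def shooting_method(x1, x2, v0, lr=0.25, n_iter=200, eps=1e-6):
    v = v0.astype(float)
    for _ in range(n_iter):
        g = np.zeros_like(v)
        for i in range(v.size):                    # finite-difference gradient
            dv = np.zeros_like(v); dv[i] = eps
            g[i] = (shoot_cost(x1, x2, v + dv)
                    - shoot_cost(x1, x2, v - dv)) / (2 * eps)
        v -= lr * g
        v -= (v @ x1) * x1                         # keep v tangent at x1
    return v

x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([0.0, 1.0, 0.0])
v_opt = shooting_method(x1, x2, np.array([0.0, 1.0, 0.0]))
H_final = shoot_cost(x1, x2, v_opt)
```

The search drives H toward zero, recovering the tangent direction (here of length π/2) whose geodesic reaches the target in unit time.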
Towards that goal, we develop the following ideas.

Figure 1: Top panels show examples of shapes manually extracted from the images. Bottom panels show examples of evolving one shape into another via a geodesic path. In each case, the leftmost shape is θ_1, the rightmost curve is θ_2, and the intermediate shapes are equi-spaced points along the geodesic.

4.1 Sample Means on Shape Spaces

Algorithms for finding geodesic paths on the shape space allow us to compute means and covariances in these spaces. We adopt a notion of mean known as the intrinsic mean or the Karcher mean ([3]), which is quite natural in our geometric framework. Let d(·, ·) be the shortest-path metric on C. To calculate the Karcher mean of shapes {θ_1, …, θ_n} in C, define a function V: C → R by V(θ) = Σ_{i=1}^n d(θ, θ_i)^2. Then, define the Karcher mean of the given shapes to be any point μ ∈ C for which V(μ) is a local minimum. In the case of Euclidean spaces, this definition agrees with the usual definition μ = (1/n) Σ_{i=1}^n p_i. Since C is complete, the intrinsic mean as defined above always exists.
However, there may be collections of shapes for which μ is not unique. An iterative algorithm for finding a Karcher mean of given shapes is given in [4].

4.2 Shape Learning

Another important problem in the statistical analysis of shapes is to "learn" probability models from observed shapes. Once the shapes are clustered, we assume that elements in the same cluster are (random) samples from the same probability model, and try to learn this model. These models can then be used for future Bayesian discovery of shapes or for the classification of new shapes. To learn a probability model amounts to estimating a probability density function on the shape space, a task that is rather difficult to perform precisely. The two main difficulties are the nonlinearity and the infinite-dimensionality of C; they are handled here as follows.

1. Tangent Space: Since C is a nonlinear manifold, we impose a probability density on a tangent space rather than on C directly. For a mean shape μ ∈ C, the space of all tangents to the shape space at μ, T_μ(C) ⊂ L^2, is an infinite-dimensional vector space. Similar to the ideas presented in [1], we impose a probability density function f on T_μ(C) in order to avoid dealing with the nonlinearity of C. The basic assumption here is that the support of f in T_μ(C) is sufficiently small that the exponential map between the support and C has a well-defined inverse.

2. Finite-Dimensional Representation: Assume that the covariance operator of the probability distribution on T_μ(C) has finite spectrum, and thus admits a finite representation. We approximate a tangent function by a truncated Fourier series to obtain a finite-dimensional representation.
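The iterative Karcher-mean algorithm alternates between mapping the data into the tangent space at the current estimate and taking a gradient step along their average. As a self-contained illustration (not the shape-space implementation of [4]), the sketch below runs it on S^2, where the exponential and inverse-exponential (log) maps are explicit:

```python
import numpy as np

def sphere_exp(mu, v):
    n = np.linalg.norm(v)
    return mu if n < 1e-12 else np.cos(n) * mu + np.sin(n) * v / n

def sphere_log(mu, x):
    """Inverse exponential map: tangent at mu pointing toward x."""
    d = x - (x @ mu) * mu
    nd = np.linalg.norm(d)
    if nd < 1e-12:
        return np.zeros_like(mu)
    return np.arccos(np.clip(x @ mu, -1.0, 1.0)) * d / nd

def karcher_mean(points, n_iter=100, step=0.5):
    mu = points[0]
    for _ in range(n_iter):
        g = np.mean([sphere_log(mu, p) for p in points], axis=0)
        mu = sphere_exp(mu, step * g)   # gradient step on V(mu)
    return mu

a = 0.5   # four points tilted by 0.5 rad around the north pole
pts = [np.array([np.sin(a), 0, np.cos(a)]), np.array([-np.sin(a), 0, np.cos(a)]),
       np.array([0, np.sin(a), np.cos(a)]), np.array([0, -np.sin(a), np.cos(a)])]
mu_hat = karcher_mean(pts)
```

By symmetry, the intrinsic mean of these four points is the north pole, which the iteration recovers.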
We thus characterize a probability distribution on T_μ(C) as one on a finite-dimensional vector space.

Let a tangent element g ∈ T_μ(C) be represented by its Fourier expansion: g(s) = Σ_{i=1}^m x_i e_i(s), for a large positive integer m. Using the identification g ≡ x = {x_i} ∈ R^m, one can define a probability distribution on elements of T_μ(C) via a probability distribution on the coefficients x.

We still have to decide what form the resulting probability distribution takes. One common approach is to assume a parametric form, so that learning is reduced to an estimation of the relevant parameters. As an example, a popular idea is to assume a Gaussian distribution on the underlying space. The variations of x are mostly restricted to an m_1-dimensional subspace of R^m, called the principal subspace, for some m_1 ≤ m. On this subspace we adopt a multivariate normal with mean μ ∈ R^{m_1} and covariance K ∈ R^{m_1×m_1}. Estimation of μ and K from the observed shapes follows the usual procedures. Computation of the mean shape μ is described in [4]. Using μ and the observed shapes θ_j, we find the tangent vectors g_j ∈ T_μ(C) such that the geodesic from μ in the direction g_j passes through θ_j in unit time. Each tangent vector is actually computed via its finite-dimensional representation, resulting in the corresponding vector of coefficients x_j. From the observed values of x_j ∈ R^m, one can estimate the principal subspace and the covariance matrix. Extracting the dominant eigenvectors of the estimated covariance matrix, one can capture the dominant modes of variation.
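Once each observed shape is reduced to a coefficient vector x_j, estimating the principal subspace is ordinary PCA on those vectors. A minimal sketch on synthetic coefficient data (the dimensions and the data itself are illustrative, not from the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
m, m1, n = 20, 3, 200
# Synthetic coefficient vectors x_j whose variation is confined to an
# m1-dimensional principal subspace of R^m (an assumption for this demo).
basis = np.linalg.qr(rng.standard_normal((m, m1)))[0]           # (m, m1)
X = (rng.standard_normal((n, m1)) * np.array([3.0, 2.0, 1.0])) @ basis.T

mu_hat = X.mean(axis=0)              # sample mean of the coefficients
K_hat = np.cov(X.T)                  # sample covariance, (m, m)
evals, evecs = np.linalg.eigh(K_hat)
order = np.argsort(evals)[::-1]      # dominant modes of variation first
top_evals = evals[order]
```

The spectrum drops sharply after the first m1 eigenvalues, identifying the principal subspace; the corresponding eigenvectors are the dominant modes.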
The density function associated with this family of shapes is given by:

    h(θ; μ, K) ≡ (1 / ((2π)^{m/2} det(K)^{1/2})) exp(−(x − μ)^T K^{−1} (x − μ)/2),    (3)

where Ψ(μ, g, 1) = θ and g = Σ_{i=1}^{m_1} (x_i − μ_i) e_i(s).

An example of this shape learning is shown in Figure 2. The top panels show infrared pictures of tanks, followed by their extracted contours in the second row of images. These contours are then used in analyzing the shapes of tanks. As an example, the 12 panels at bottom left show the observed contours of a tank viewed from a variety of angles, and we are interested in capturing this shape variation. Repeating the earlier process, the mean shape is shown in the top middle panel and the eigenvalues are plotted in the bottom middle panel. The twelve panels on the right show shapes generated randomly from the parametric model h(θ; μ, Σ).

In Figure 3 we present an interesting example of samples from three different shape models. Let the original model be h(θ; μ, K), where μ and K are as shown in Figure 2. Six samples from this model are shown on the left of Figure 3. The middle shows samples from a probability density h(θ; μ, 0.2K), demonstrating a smaller covariance; these samples lie much closer to the mean shape. The right shows samples from a density whose covariance is equivariant in the principal subspace, i.e., the covariance is given by 0.4‖K‖^2 times a matrix whose top-left block is a 12 × 12 identity matrix and whose remaining entries are zero.

Figure 2: Top two rows: infrared images and extracted contours of two tanks, an M60 and a T72, at different viewing angles.
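Sampling from the model of Eqn. (3) amounts to drawing a coefficient vector from N(μ, K) and reassembling the tangent function on the Fourier basis (the sampled shape itself would then be Ψ(μ, g, 1)). A minimal sketch with an illustrative covariance, basis size, and basis ordering (all assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)
m1 = 12
mu = np.zeros(m1)                        # coefficient mean (illustrative)
A = rng.standard_normal((m1, m1))
K = A @ A.T / m1 + 0.1 * np.eye(m1)      # an SPD covariance (illustrative)

# Draw x ~ N(mu, K) via the Cholesky factor of K.
L = np.linalg.cholesky(K)
x = mu + L @ rng.standard_normal(m1)

# Rebuild the tangent function g(s) = sum_i x_i e_i(s) on an orthonormal
# Fourier basis of L2[0, 2*pi] (cos/sin pairs, scaled by 1/sqrt(pi)).
s = np.linspace(0, 2 * np.pi, 256, endpoint=False)
E = np.stack([np.cos((i // 2 + 1) * s) if i % 2 == 0
              else np.sin((i // 2 + 1) * s)
              for i in range(m1)]) / np.sqrt(np.pi)
g = x @ E                                # sampled tangent direction at mu
```

Projecting g back onto the basis recovers the sampled coefficients, confirming that the discretized basis is orthonormal.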
Bottom row: for the 12 observed M60 shapes shown at left, the middle panels show the mean shape and the principal eigenvalues of the covariance, and the right panels show 12 random samples from the Gaussian model h(θ; μ, K).

Figure 3: Comparison of samples from three families: (i) h(θ; μ, K), (ii) h(θ; μ, 0.2K), and (iii) h(θ; μ, 0.4‖K‖^2 I_{12}).

4.3 Hypothesis Testing

This framework of shape representations and statistical models on shape spaces has important applications in decision theory. One is to recognize an imaged object according to the shape of its boundary. Statistical analysis on shape spaces can be used to make a variety of decisions, such as: Does this shape belong to a given family of shapes? Do these two families of shapes have similar means and variances? Given a test shape and two competing probability models, which one explains the test shape better?

We restrict to the case of binary hypothesis testing since, for multiple hypotheses, one can find the best hypothesis using a sequence of binary tests. Consider two shape families specified by their probability models h_1 and h_2. For an observed shape θ ∈ C, we are interested in selecting one of the two following hypotheses: H_0: θ ~ h_1 or H_1: θ ~ h_2. We select a hypothesis according to the likelihood ratio test: l(θ) ≡ log(h_1(θ)/h_2(θ)) ≷ 0. Substituting the normal distributions (Eqn. 3) for h_1 ≡ h(θ; μ_1, Σ_1) and h_2 ≡ h(θ; μ_2, Σ_2), we can obtain sufficient statistics for this test. Let x_1 be the vector of Fourier coefficients that encode the tangent direction from μ_1 to θ, and x_2 the same for the direction from μ_2 to θ. In other words, if we let g_1 = Σ_{i=1}^m x_{1,i} e_i and g_2 = Σ_{i=1}^m x_{2,i} e_i, then we have θ = Ψ(μ_1, g_1, 1) = Ψ(μ_2, g_2, 1).
It follows that

    l(θ) = (1/2) (x_1^T Σ_1^{−1} x_1 − x_2^T Σ_2^{−1} x_2) − (1/2) (log(det(Σ_2)) − log(det(Σ_1))).    (4)

In case the two covariances are equal to Σ, the hypothesis test reduces to l(θ) = x_1^T Σ^{−1} x_1 − x_2^T Σ^{−1} x_2 ≷ 0, and when Σ is the identity, the log-likelihood ratio is given by l(θ) = ‖x_1‖^2 − ‖x_2‖^2. The curved nature of the shape space C makes the analysis of this test difficult. For instance, one may be interested in the probability of a type-one error, but that calculation requires a probability model on x_2 when H_0 is true. As a first-order approximation, one can write x_2 ~ N(x̄, Σ_1), where x̄ is the coefficient vector of the tangent direction in T_{μ_2}(C) that corresponds to the geodesic from μ_2 to μ_1. However, the validity of this approximation remains to be tested under experimental conditions.

5 Conclusion

We have presented an overview of an ambitious framework for solving optimization and inference problems on a shape space. The main idea is to exploit the differential geometry of the manifold to obtain simpler solutions than those obtained with PDE-based methods. We have presented some applications of this framework in image understanding. In particular, these ideas lead to a novel statistical theory of shapes of planar objects with powerful tools for shape analysis.

Acknowledgments

This research was supported in part by grants NSF (FRG) DMS-0101429, NMA 201-01-2010, and NSF (ACT) DMS-0345242.

References

[1] I. L. Dryden and K. V. Mardia. Statistical Shape Analysis. John Wiley & Sons, 1998.
[2] N. Duta, M. Sonka, and A. K. Jain. Learning shape models from examples using automatic shape clustering and Procrustes analysis.
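Once x_1 and x_2 have been computed by the shape logarithms of Section 3.3, the sufficient statistic is a few lines of linear algebra. A sketch coded to the form and sign convention of Eqn. (4) as printed (the function name is illustrative):

```python
import numpy as np

def log_likelihood_ratio(x1, x2, S1, S2):
    """l(theta) of Eqn. (4); a positive value selects H0 under the
    convention l(theta) = log(h1/h2) >< 0."""
    q1 = x1 @ np.linalg.solve(S1, x1)          # x1' S1^{-1} x1
    q2 = x2 @ np.linalg.solve(S2, x2)          # x2' S2^{-1} x2
    _, ld1 = np.linalg.slogdet(S1)
    _, ld2 = np.linalg.slogdet(S2)
    return 0.5 * (q1 - q2) - 0.5 * (ld2 - ld1)

# Identity covariances: the statistic reduces to (||x1||^2 - ||x2||^2) / 2.
l_val = log_likelihood_ratio(np.array([2.0, 0.0]), np.array([1.0, 0.0]),
                             np.eye(2), np.eye(2))
```

Using solve and slogdet instead of explicit inverses and determinants keeps the statistic stable when the estimated covariances are ill-conditioned.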
In Proceedings of Information Processing in Medical Imaging, volume 1613 of Lecture Notes in Computer Science, pages 370–375. Springer, 1999.

[3] H. Karcher. Riemannian center of mass and mollifier smoothing. Communications on Pure and Applied Mathematics, 30:509–541, 1977.

[4] E. Klassen, A. Srivastava, W. Mio, and S. Joshi. Analysis of planar shapes using geodesic paths on shape spaces. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(3), to appear, March 2004.

[5] H. Le. Locating Fréchet means with application to shape spaces. Advances in Applied Probability, 33(2):324–338, 2001.

[6] W. Mio, A. Srivastava, and E. Klassen. Interpolation by elastica in Euclidean spaces. Quarterly of Applied Mathematics, to appear, 2003.

[7] D. Mumford. Elastica and computer vision, pages 491–506. Springer, New York, 1994.

[8] B. Romeny, editor. Geometry-Driven Diffusion in Computer Vision. Kluwer, 1994.

[9] T. B. Sebastian, P. N. Klein, and B. B. Kimia. On aligning curves. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(1):116–125, 2003.

[10] J. Sethian. Level Set Methods: Evolving Interfaces in Geometry, Fluid Mechanics, Computer Vision, and Material Science. Cambridge University Press, 1996.

[11] E. Sharon, A. Brandt, and R. Basri. Completion energies and scale. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(10):1117–1131, 2000.

[12] L. Younes. Optimal matching between shapes via elastic deformations. Journal of Image and Vision Computing, 17(5/6):381–389, 1999.
", "award": [], "sourceid": 2515, "authors": [{"given_name": "Anuj", "family_name": "Srivastava", "institution": null}, {"given_name": "Washington", "family_name": "Mio", "institution": null}, {"given_name": "Xiuwen", "family_name": "Liu", "institution": null}, {"given_name": "Eric", "family_name": "Klassen", "institution": null}]}