{"title": "Implicit Surfaces with Globally Regularised and Compactly Supported Basis Functions", "book": "Advances in Neural Information Processing Systems", "page_first": 273, "page_last": 280, "abstract": null, "full_text": "Implicit Surfaces with Globally Regularised and Compactly Supported Basis Functions\n\n\n\n Christian Walder , Bernhard Scholkopf & Olivier Chapelle Max Planck Institute for Biological Cybernetics, 72076 Tubingen, Germany T he University of Queensland, Brisbane, Queensland 4072, Australia first.last@tuebingen.mpg.de\n\nAbstract\nWe consider the problem of constructing a function whose zero set is to represent a surface, given sample points with surface normal vectors. The contributions include a novel means of regularising multi-scale compactly supported basis functions that leads to the desirable properties previously only associated with fully supported bases, and show equivalence to a Gaussian process with modified covariance function. We also provide a regularisation framework for simpler and more direct treatment of surface normals, along with a corresponding generalisation of the representer theorem. We demonstrate the techniques on 3D problems of up to 14 million data points, as well as 4D time series data.\n\n1\n\nIntroduction\n\nThe problem of reconstructing a surface from a set of points frequently arises in computer graphics. Numerous methods of sampling physical surfaces are now available, including laser scanners, optical triangulation systems and mechanical probing methods. Inferring a surface from millions of points sampled with noise is a non-trivial task however, for which a variety of methods have been proposed. The class of implicit or level set surface representations is a rather large one, however other methods have also been suggested for a review see [1]. 
The implicit surface methods closest to the present work are those that construct the implicit using regularised function approximation [2], such as the \"Variational Implicits\" of Turk and O'Brien [3], which produce excellent results, but at a cubic computational fitting cost in the number of points. The effectiveness of this type of approach is undisputed, however, and has led researchers to look for ways to overcome the computational problems. Two main options have emerged. The first approach uses compactly supported kernel functions (we define and discuss kernel functions in Section 2), leading to fast algorithms that are easy to implement [4]. Unfortunately, however, these methods are suitable only for benign data sets. As noted in [5], compactly supported basis functions \"yield surfaces with many undesirable artifacts in addition to the lack of extrapolation across holes\". A similar conclusion was reached in [6], which states that local processing methods are \"more sensitive to the quality of input data [than] approximation and interpolation techniques based on globally-supported radial basis functions\", a conclusion corroborated by the results within a different paper from the same group [7]. The second means of overcoming the aforementioned computational problem does not suffer from these drawbacks, as demonstrated by the FastRBF™ algorithm [5], which uses the Fast Multipole Method (FMM) [8] to overcome the computational problems of non-compactly supported kernels. The resulting method is non-trivial to implement, however, and to date exists only in the proprietary FastRBF™ package. We believe that, applied in a different manner, compactly supported basis functions can lead to high quality results, and the present work is an attempt to bring the reader to the same conclusion. 
In Section 3 we introduce a new technique for regularising such basis functions which allows high quality, highly scalable algorithms that are relatively easy to implement. We also show that the approximation can be interpreted as a Gaussian process with a modified covariance function. Before doing so, however, we present in Section 2 the other main contribution of the present work, which is to show how surface normal vectors can be incorporated directly into the regularised regression framework that is typically used for fitting implicit surfaces, thereby avoiding the problematic approach of constructing \"off-surface\" points for the regression problem. To demonstrate the effectiveness of the method we apply it to various problems in Section 4, before summarising in the final Section 5.

Figure 1: (a) Rendered implicit surface model of \"Lucy\", constructed from 14 million points with normals. (b) A planar slice through the nose; the colour represents the value of the embedding function and the black line its zero level. (c) A black dot at each of the 364,982 compactly supported basis function centres which, along with the corresponding dilations and magnitudes, define the implicit.

2

Implicit Surface Fitting by Regularised Regression

Here we discuss the use of regularised regression [2] for the problem of implicit surface fitting. In Section 2.1 we motivate and introduce a clean and direct means of making use of normal vectors. Section 2.2 builds on the ideas of Section 2.1 by formally generalising the important representer theorem. The final Section 2.3 discusses the choice of regulariser (and associated kernel function), as well as the associated computational problems, which we overcome in Section 3. 
2.1 Regression Based Approaches and the Use of Normal Vectors

Typically, implicit surface fitting has been done by solving a regularised regression problem [5, 4]:

\[ \arg\min_{f \in \mathcal{H}} \; \|f\|_{\mathcal{H}}^2 + C \sum_{i=1}^{m} \left( f(x_i) - y_i \right)^2, \tag{1} \]

where the y_i are some estimate of the signed distance function at the x_i, and f is the embedding function, which takes the value zero on the implicit surface. The norm \|f\|_{\mathcal{H}} is a regulariser which takes larger values for less \"smooth\" functions. We take \mathcal{H} to be a reproducing kernel Hilbert space (RKHS) with representer of evaluation (kernel function) k(\cdot, \cdot), so that we have the reproducing property f(x) = \langle f, k(x, \cdot) \rangle_{\mathcal{H}}. The solution to this problem has the form

\[ f(x) = \sum_{i=1}^{m} \alpha_i k(x_i, x). \tag{2} \]

Note as a technical aside that the thin-plate kernel, which we will adopt, requires a somewhat more technical interpretation, as it is only conditionally positive definite. We discuss the positive definite case for clarity only, as it is simpler and yet sufficient to demonstrate the ideas involved. Choosing the (x_i, y_i) pairs for (2) is itself a non-trivial problem, and heuristics are typically used to prevent contradictory target values (see e.g. [5]). We now propose a more direct method, novel in the context of implicit fitting, which avoids these problems. The approach is suggested by the fact that the normal direction of the implicit surface is given by the gradient of the embedding function; thus normal vectors can be incorporated by regression with gradient targets. The function that we seek is the minimiser of

\[ \|f\|_{\mathcal{H}}^2 + C_1 \sum_{i=1}^{m} \left( f(x_i) \right)^2 + C_2 \sum_{i=1}^{m} \left\| (\nabla f)(x_i) - n_i \right\|_{\mathbb{R}^d}^2, \tag{3} \]

which uses the given surface point/normal pairs (x_i, n_i) directly. By imposing stationarity and using the reproducing property we can solve for the optimal f. A detailed derivation of this procedure is given in [1]. 
Here we provide only the result, which is that we have to solve for m coefficients \alpha_i as well as a further md coefficients \beta_{li} to obtain the optimal solution

\[ f(x) = \sum_{i=1}^{m} \alpha_i k(x_i, x) + \sum_{i=1}^{m} \sum_{l=1}^{d} \beta_{li} k_l(x_i, x), \tag{4} \]

where we define k_l(x_i, x) \doteq [(\nabla_1 k)(x_i, x)]_l, the partial derivative of k in the l-th component of its first argument.^1 The coefficients \alpha and \beta_l of the solution are found by solving the system given by

\[ 0 = (K + I/C_1)\alpha + \sum_{l=1}^{d} K_l \beta_l, \tag{5} \]

\[ N_m = K_m \alpha + (K_{mm} + I/C_2)\beta_m + \sum_{l \neq m} K_{lm} \beta_l, \quad m = 1 \ldots d, \tag{6} \]

where, writing k_{lm} for the second derivatives of k(\cdot, \cdot) (defined similarly to the first), we have defined

[N_l]_i = [n_i]_l; \quad [\beta_l]_i = \beta_{li}; \quad [K_l]_{i,j} = k_l(x_i, x_j); \quad [\alpha]_i = \alpha_i; \quad [K]_{i,j} = k(x_i, x_j); \quad [K_{lm}]_{i,j} = k_{lm}(x_i, x_j).

In summary, minimum norm approximation in an RKHS with gradient target values is optimally solved by a function in the span of the kernels and their derivatives as per Equation 4 (cf. Equation 2), and the coefficients of the solution are given by Equations (5) and (6). It turns out, however, that we can make a more general statement, which we do briefly in the next sub-Section.

2.2 The Representer Theorem with Linear Operators

The representer theorem, much celebrated in the machine learning community, says that the function minimising an RKHS norm along with some penalties associated with the function values at various points (as in Equation 1, for example) is a sum of kernel functions centred at those points (as in Equation 2). As we saw in the previous section, however, if gradients also appear in the risk function to be minimised, then gradients of the kernel function appear in the optimal solution. We now make a more general statement; the case of the previous section corresponds to choosing the linear operators L_i (which we define shortly) to be either identities or partial derivatives. 
The theorem is a generalisation of [9] (using the same proof idea), with equivalence if we choose all L_i to be identity operators. The case of general linear operators was in fact dealt with already in [2] (which merely states the earlier result of [10]), but only for the case of a specific loss function c. The following theorem therefore combines the two frameworks:

Theorem 1 Denote by X a non-empty set, by k a reproducing kernel with reproducing kernel Hilbert space \mathcal{H}, by \Omega a strictly monotonically increasing real-valued function on [0, \infty), by c : \mathbb{R}^m \to \mathbb{R} \cup \{\infty\} an arbitrary cost function, and by L_1, \ldots, L_m a set of linear operators \mathcal{H} \to \mathcal{H}. Each minimiser f \in \mathcal{H} of the regularised risk functional

\[ c\big( (L_1 f)(x_1), \ldots, (L_m f)(x_m) \big) + \Omega\big( \|f\|_{\mathcal{H}}^2 \big) \tag{7} \]

admits the form

\[ f = \sum_{i=1}^{m} \alpha_i L_i^{*} k_{x_i}, \tag{8} \]

where k_x \doteq k(\cdot, x) and L_i^{*} denotes the adjoint of L_i.

^1 Square brackets with subscripts indicate matrix elements: [a]_i is the i-th element of the vector a.

Proof. Decompose f into f = \sum_{i=1}^{m} \alpha_i L_i^{*} k_{x_i} + f_{\perp}, with \alpha_i \in \mathbb{R} and \langle f_{\perp}, L_i^{*} k_{x_i} \rangle_{\mathcal{H}} = 0 for each i = 1 \ldots m. Due to the reproducing property we can write, for j = 1 \ldots m,

\[ (L_j f)(x_j) = \langle L_j f, k(\cdot, x_j) \rangle_{\mathcal{H}} = \sum_{i=1}^{m} \alpha_i \langle L_j L_i^{*} k_{x_i}, k(\cdot, x_j) \rangle_{\mathcal{H}} + \langle L_j f_{\perp}, k(\cdot, x_j) \rangle_{\mathcal{H}} = \sum_{i=1}^{m} \alpha_i \langle L_j L_i^{*} k_{x_i}, k(\cdot, x_j) \rangle_{\mathcal{H}}. \]

Thus, the first term of Equation 7 is independent of f_{\perp}. Moreover, it is clear due to orthogonality that if f_{\perp} \neq 0 then

\[ \Big\| \sum_{i=1}^{m} \alpha_i L_i^{*} k_{x_i} + f_{\perp} \Big\|_{\mathcal{H}}^2 > \Big\| \sum_{i=1}^{m} \alpha_i L_i^{*} k_{x_i} \Big\|_{\mathcal{H}}^2, \]

so that for any fixed \alpha_i \in \mathbb{R}, Equation 7 is minimised when f_{\perp} = 0.

2.3 Thin Plate Regulariser and Associated Kernel

As is well known (see e.g. [2]), the choice of regulariser (the function norm in Equation 3) leads to a particular kernel function k(\cdot, \cdot) to be used in Equation 4. For geometrical problems, an excellent regulariser is the thin-plate energy, which for arbitrary order m and dimension d is given by [2]:

\[ \|f\|_{\mathcal{H}}^2 = \langle \Upsilon f, \Upsilon f \rangle_{L_2} \tag{9} \]

\[ = \sum_{i_1=1}^{d} \cdots \sum_{i_m=1}^{d} \int_{x_1=-\infty}^{\infty} \cdots \int_{x_d=-\infty}^{\infty} \left( \frac{\partial^m f}{\partial x_{i_1} \cdots \partial x_{i_m}} \right)^2 dx_1 \ldots dx_d, \tag{10} \]

where \Upsilon is a regularisation operator taking all partial derivatives of order m. This corresponds to a \"radial\" kernel function of the form k(x, y) = t(\|x - y\|), where [11]

\[ t(r) = \begin{cases} r^{2m-d} \ln(r) & \text{if } 2m > d \text{ and } d \text{ is even,} \\ r^{2m-d} & \text{otherwise.} \end{cases} \]

There are a number of good reasons to use this regulariser rather than those leading to compactly supported kernels, as we touched on in the introduction. The main problem with compactly supported kernels is that the corresponding regularisers are somewhat poor for geometrical problems: they always draw the function towards some nominal constant as one moves away from the data, thereby implementing the non-intuitive behaviour of regularising the constant function and making interpolation impossible; for further discussion see [1] as well as [5, 6, 7]. The scheme we propose in Section 3 solves these problems, previously associated with compactly supported basis functions, by defining and computing the regulariser separately from the function basis.

3

A Fast Scheme using Compactly Supported Basis Functions

Here we present a fast approximate scheme for solving the problem of the previous Section, in which we restrict the class of functions to the span of a compactly supported, multi-scale basis, as described in Section 3.1, and minimise the thin-plate regulariser within this span as per Section 3.2.

3.1 Restricting the Set of Available Functions

Computationally, using the thin-plate spline leads to the problem that the linear system we need to solve (Equations 5 and 6), which is of size m(d + 1), is dense in the sense of having almost all non-zero entries. Since solving such a system naively has a cubic time complexity in m, we propose forcing f(\cdot) to take the form

\[ f(\cdot) = \sum_{k=1}^{p} \alpha_k f_k(\cdot), \tag{11} \]

where the individual basis functions are f_k(\cdot) = \psi(\|v_k - \cdot\|/s_k) for some function \psi : \mathbb{R}_+ \to \mathbb{R} with support [0, 1). The v_k and s_k are the basis function centres and dilations (or scales), respectively. 
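To make Equation 11 concrete, here is a minimal sketch of evaluating such a multi-scale, compactly supported expansion. The centres, dilations and coefficients are random placeholders, a Wendland function stands in for the B-spline used in the paper, and SciPy's cKDTree plays the role of the fast range-search library of [12]: only bases whose support ball can contain the query point are touched.

```python
import numpy as np
from scipy.spatial import cKDTree

def psi(r):
    # A compactly supported radial function with support [0, 1):
    # Wendland's C^2 function, a stand-in for the B-spline basis.
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

class CompactImplicit:
    # f(x) = sum_k alpha_k * psi(||v_k - x|| / s_k), cf. Equation 11.
    def __init__(self, centres, scales, alpha):
        self.v, self.s, self.alpha = centres, scales, alpha
        self.tree = cKDTree(centres)  # fast range searches, cf. [12]
        self.smax = scales.max()

    def __call__(self, x):
        # Only bases whose support ball contains x can contribute.
        idx = np.asarray(self.tree.query_ball_point(x, self.smax), dtype=int)
        if idx.size == 0:
            return 0.0
        r = np.linalg.norm(self.v[idx] - x, axis=1) / self.s[idx]
        return float(self.alpha[idx] @ psi(r))

rng = np.random.default_rng(1)
V = rng.uniform(-1.0, 1.0, size=(200, 3))  # centres v_k
S = rng.uniform(0.1, 0.3, size=200)        # dilations s_k
A = rng.standard_normal(200)               # coefficients alpha_k

f = CompactImplicit(V, S, A)
print(f(np.zeros(3)))                # a local sum over the few nearby bases
print(f(np.array([10.0, 0.0, 0.0]))) # far from all centres: exactly 0.0
```

Because each evaluation touches only the handful of bases overlapping the query point, evaluation cost is essentially independent of p; the same locality is what makes the linear system of Section 3 sparse.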
For \psi we choose the B_3-spline function

\[ \psi_d(r) = \sum_{n=0}^{d+1} \frac{(-1)^n}{d!} \binom{d+1}{n} \left( r + \frac{d+1}{2} - n \right)_+^d, \quad d = 3, \tag{12} \]

although this choice is rather inconsequential since, as we shall ensure, the regulariser is unrelated to the function basis: any smooth compactly supported basis function could be used. In order to achieve the same interpolating properties as the thin-plate spline, we wish to minimise our regularised risk function given by Equation 3 within the span of Equation 11. The key to doing this is to note that, as given before in Equation 9, the regulariser (function norm) can be written as \|f\|_{\mathcal{H}}^2 = \langle \Upsilon f, \Upsilon f \rangle_{L_2}. Given this fact, a straightforward calculation leads to the following system for the optimal \alpha_k (in the sense of minimising Equation 3):

\[ \left( K_{\mathrm{reg}} + C_1 K_{xv}^{T} K_{xv} + C_2 \sum_{l=1}^{d} K_{xvl}^{T} K_{xvl} \right) \alpha = C_2 \sum_{l=1}^{d} K_{xvl}^{T} N_l, \tag{13} \]

where we have defined the following matrices:

[K_{\mathrm{reg}}]_{k,k'} = \langle \Upsilon f_k, \Upsilon f_{k'} \rangle_{L_2}; \quad [K_{xv}]_{i,k} = f_k(x_i); \quad [K_{xvl}]_{i,k} = [(\nabla f_k)(x_i)]_l; \quad [\alpha]_k = \alpha_k; \quad [N_l]_i = [n_i]_l.

The computational advantage is that the coefficients we need are now given by a sparse p-dimensional positive semi-definite linear system, which can be constructed efficiently by simple code that takes advantage of software libraries for fast nearest neighbour type searches (see e.g. [12]). The system can then be solved efficiently using conjugate gradient type methods. In [1] we describe how we construct a basis with p \ll m that results in a highly sparse linear system, but that still contains good solutions. The critical matter of computing K_{\mathrm{reg}} is dealt with next.

3.2 Computing the Regularisation Matrix

We now come to the crucial point of calculating K_{\mathrm{reg}}, which can be thought of as the regularisation matrix. The present Section is closely related to [13]; there, however, numerical methods were resorted to for the calculation of K_{\mathrm{reg}}, whereas presently we shall derive closed form solutions. 
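The truncated-power form of Equation 12 can be checked numerically. The sketch below implements the centred cardinal B-spline of degree d, whose support is (-(d+1)/2, (d+1)/2); note that the paper's \psi is a rescaling of this with support [0, 1), which we omit here.

```python
import numpy as np
from math import comb, factorial

def bspline(r, d=3):
    # Centred cardinal B-spline of degree d via the truncated-power
    # formula of Equation 12; support is (-(d+1)/2, (d+1)/2).
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    for n in range(d + 2):
        out += ((-1.0) ** n / factorial(d)) * comb(d + 1, n) \
               * np.maximum(r + (d + 1) / 2.0 - n, 0.0) ** d
    return out

# Classic cubic B-spline values: 2/3 at the centre, 1/6 at +-1, 0 beyond +-2;
# the integer shifts also form a partition of unity.
print(bspline([0.0, 1.0, 2.0]))
print(sum(bspline(0.3 - n) for n in range(-3, 4)))
```

The partition-of-unity check is a quick way to confirm that the alternating-sign coefficients have been transcribed correctly, since any error in a single term breaks it.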
Also worth comparing to the present Section is [14], where a prior over the expansion coefficients (here the \alpha_k) is used to mimic a given regulariser within an arbitrary basis, achieving a similar result but without the computational advantages we are aiming for. As we have already noted, we can write \|f\|_{\mathcal{H}}^2 = \langle \Upsilon f, \Upsilon f \rangle_{L_2} [2], so that for the function given by Equation 11 we have:

\[ \Big\| \sum_{j=1}^{p} \alpha_j f_j(\cdot) \Big\|_{\mathcal{H}}^2 = \Big\langle \Upsilon \sum_{j=1}^{p} \alpha_j f_j(\cdot), \; \Upsilon \sum_{k=1}^{p} \alpha_k f_k(\cdot) \Big\rangle_{L_2} = \sum_{j,k=1}^{p} \alpha_j \alpha_k \langle \Upsilon f_j(\cdot), \Upsilon f_k(\cdot) \rangle_{L_2} = \alpha^{T} K_{\mathrm{reg}} \alpha. \]

To build the sparse matrix K_{\mathrm{reg}}, a fast range search library (e.g. [12]) can be used to identify the non-zero entries, that is, all those [K_{\mathrm{reg}}]_{i,j} for which i and j satisfy \|v_i - v_j\| \leq s_i + s_j. In order to evaluate \langle \Upsilon f_j(\cdot), \Upsilon f_k(\cdot) \rangle_{L_2}, it is necessary to solve the integral of Equation 10, the full derivation of which we relegate to [1]; here we just provide the main results. It turns out that since the f_i are all dilations and translations of the same function \psi(\cdot), it is sufficient to solve for the following function of s_1, s_2 and d \doteq v_i - v_j: \langle \Upsilon \psi(\|\cdot - d\|/s_1), \Upsilon \psi(\|\cdot\|/s_2) \rangle_{L_2}, which it turns out is given by

\[ \left( \mathcal{F}^{-1} \left[ (2\pi j \|\omega\|)^{2m} \, |s_1 s_2| \, \hat{\psi}(s_1 \omega) \, \hat{\psi}(s_2 \omega) \right] \right)(d), \tag{14} \]

where j^2 = -1, \hat{\psi} = \mathcal{F}_x[\psi(x)], and by \mathcal{F} (and \mathcal{F}^{-1}) we mean the Fourier (inverse Fourier) transform operators in the subscripted variable. 

Figure 2: Various values of the regularisation parameters lead to various amounts of \"smoothing\"; here we set C_1 = C_2 in Equation 3 to a value increasing from top-left to bottom-right of the figure.

Figure 3: Ray traced three dimensional implicits: \"Happy Buddha\" (543K points with normals) and the \"Thai Statue\" (5 million points with normals).
Computing Fourier transforms in d dimensions is difficult in general, but for radial functions g(x) = g_r(\|x\|) the task is eased by the fact that the Fourier transform in d dimensions (as well as its inverse) can be computed by the single integral

\[ \mathcal{F}_x[g_r(\|x\|)](\omega) = \frac{(2\pi)^{d/2}}{\|\omega\|^{\frac{d-2}{2}}} \int_0^{\infty} r^{\frac{d}{2}} \, g_r(r) \, J_{\frac{d-2}{2}}(\|\omega\| r) \, dr, \]

where J_\nu(r) is the \nu-th order Bessel function of the first kind. Unfortunately the integrals required to attain Equation 14 in closed form cannot be solved for general dimensionality d, regularisation operator \Upsilon and basis function form \psi; however, we did manage to solve them for arguably the most useful case: d = 3 with the m = 2 thin plate energy and the B_3-spline basis function of Equation 12. The resulting expressions are rather unwieldy, however, so we give only an implementation in the C language in the Appendix of [1], where we also show that for the cases that cannot be solved analytically, the required integral can at worst always be transformed to a two dimensional integral for which one can use numerical methods.

3.3 Interpretation as a Gaussian Process

Presently we use ideas from [15] to demonstrate that the approximation described in this Section 3 is equivalent to inference in an exact Gaussian process with a covariance function that depends on the choice of function basis. Placing a multivariate Gaussian prior over the coefficients \alpha in (11), namely \alpha \sim N(0, K_{\mathrm{reg}}^{-1}), we see that f obeys a zero mean Gaussian process prior: writing [f_x]_i = f(x_i) and denoting expectations by E[\cdot], we have for the covariance

\[ E[f_x f_x^{T}] = K_{xz} E[\alpha \alpha^{T}] K_{xz}^{T} = K_{xz} K_{\mathrm{reg}}^{-1} K_{xz}^{T}. \]

Now, assuming an i.i.d. Gaussian noise model with variance \sigma^2 and defining K_{xt} etc. similarly to K_{xz}, we can immediately write the joint distribution between the observation at a test point t, that is y_t \sim N(f(t), \sigma^2), and the vector of observations at the x_i, namely y_x \sim N(f_x, \sigma^2 I), which is

\[ p(y_x, y_t) = N\left( 0, \begin{pmatrix} K_{xz} K_{\mathrm{reg}}^{-1} K_{zx} + \sigma^2 I & K_{xz} K_{\mathrm{reg}}^{-1} K_{zt} \\ K_{tz} K_{\mathrm{reg}}^{-1} K_{zx} & K_{tz} K_{\mathrm{reg}}^{-1} K_{zt} + \sigma^2 \end{pmatrix} \right). \]

The posterior distribution is therefore itself Gaussian, p(y_t | y_x) = N(\mu_{y_t|y_x}, \Sigma_{y_t|y_x}), and we can employ the well known expression for the conditionals of a multivariate Gaussian,

\[ \begin{pmatrix} x \\ y \end{pmatrix} \sim N\left( \begin{pmatrix} a \\ b \end{pmatrix}, \begin{pmatrix} A & C \\ C^{T} & B \end{pmatrix} \right) \;\Rightarrow\; x|y \sim N\big( a + C B^{-1}(y - b), \; A - C B^{-1} C^{T} \big), \]

followed by the matrix inversion lemma, to derive an expression for the mean of the posterior:

\[ \mu_{y_t|y_x} = K_{tz} K_{\mathrm{reg}}^{-1} K_{zx} \left( K_{xz} K_{\mathrm{reg}}^{-1} K_{zx} + \sigma^2 I \right)^{-1} y = K_{tz} \left( \sigma^2 K_{\mathrm{reg}} + K_{xz}^{T} K_{xz} \right)^{-1} K_{xz}^{T} y. \]

Name         | # Points | # Bases | Basis  | Kreg  | Kxv, Kzv | Multiply | Solve  | Total
Bunny        | 34834    | 9283    | 0.4    | 2.4   | 3.7      | 11.7     | 20.4   | 38.7
Face         | 75970    | 7593    | 0.7    | 1.9   | 7.0      | 20.3     | 16.0   | 46.0
Armadillo    | 172974   | 45704   | 6.6    | 8.5   | 37.0     | 123.4    | 72.3   | 247.9
Dragon       | 437645   | 65288   | 14.4   | 16.3  | 70.9     | 322.8    | 1381.4 | 1805.7
Buddha       | 543197   | 105993  | 117.4  | 27.4  | 99.4     | 423.7    | 2909.3 | 3577.2
Asian Dragon | 3609455  | 232197  | 441.6  | 60.9  | 608.3    | 1885.0   | 1009.5 | 4005.2
Thai Statue  | 4999996  | 530966  | 3742.0 | 197.5 | 1575.6   | 3121.2   | 2569.5 | 11205.7
Lucy         | 14027872 | 364982  | 1425.8 | 170.5 | 3484.1   | 9367.7   | 1340.5 | 15788.5

Table 1: Timing results with a 2.4GHz AMD Opteron 850 processor, for various 3D data sets. The first column after the name is the number of points, each of which has an associated normal vector, and the next is the number of basis vectors (the p of Section 3.1). The remaining columns are all in units of seconds: \"Basis\" is the time taken to construct the function basis, the next two columns are the times required to construct the indicated matrices, \"Multiply\" is the time required to multiply the matrices as per Equation 13, \"Solve\" is the time required to solve that same equation for \alpha, and the final column is the total fitting time.

By comparison with (11) and (13) (but with C_1 = 1/\sigma^2, C_2 = 0 and y = 0) we can see that the mean of the posterior distribution is identical to our approximate regularised solution based on compactly supported basis functions. 
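The equivalence of the two forms of the posterior mean (via the matrix inversion lemma) is easy to verify numerically. The sketch below uses random stand-ins for K_reg, K_xz and K_tz rather than kernel matrices from an actual basis, purely to check the algebraic identity.

```python
import numpy as np

rng = np.random.default_rng(2)
m, p, t = 20, 8, 5    # observations, basis functions, test points
sigma2 = 0.1          # noise variance

# Random stand-ins for the matrices of Section 3.3 (K_reg symmetric PD).
B = rng.standard_normal((p, p))
Kreg = B @ B.T + p * np.eye(p)
Kxz = rng.standard_normal((m, p))
Ktz = rng.standard_normal((t, p))
y = rng.standard_normal(m)

Kzx, Kzt = Kxz.T, Ktz.T
Kinv = np.linalg.inv(Kreg)

# Posterior mean, conditional-Gaussian form:
#   K_tz K_reg^-1 K_zx (K_xz K_reg^-1 K_zx + sigma^2 I)^-1 y
mu1 = Ktz @ Kinv @ Kzx @ np.linalg.solve(Kxz @ Kinv @ Kzx + sigma2 * np.eye(m), y)

# Same mean after the matrix inversion lemma:
#   K_tz (sigma^2 K_reg + K_xz^T K_xz)^-1 K_xz^T y
mu2 = Ktz @ np.linalg.solve(sigma2 * Kreg + Kxz.T @ Kxz, Kxz.T @ y)

print(np.abs(mu1 - mu2).max())  # agreement up to round-off
```

The second form is the one that matters computationally: it solves a p-dimensional system rather than an m-dimensional one, matching the structure of Equation 13.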
For the corresponding posterior variance we have

\[ \Sigma_{y_t|y_x} = K_{tz} K_{\mathrm{reg}}^{-1} K_{zt} + \sigma^2 - K_{tz} K_{\mathrm{reg}}^{-1} K_{zx} \left( K_{xz} K_{\mathrm{reg}}^{-1} K_{zx} + \sigma^2 I \right)^{-1} K_{xz} K_{\mathrm{reg}}^{-1} K_{zt} = \sigma^2 K_{tz} \left( \sigma^2 K_{\mathrm{reg}} + K_{xz}^{T} K_{xz} \right)^{-1} K_{zt} + \sigma^2. \]

4

Experiments

We fit models to 3D data sets of up to 14 million data points; timings are given in Table 1, where we also see that good compression ratios are attained, in that relatively few basis functions represent the shapes. Also note that the fitting time scales rather well, from 38 seconds for the Stanford Bunny (35 thousand points with normals) to 4 hours 23 minutes for the Lucy statue (14 million points with normals: 14 x 10^6 points x (1 value term + 3 gradient terms) = 56 million \"regression targets\"). Taking account of the different hardware, the times seem to be similar to those of the FMM approach [5]. Some rendered examples are given in Figures 1 and 3, and the well-behaved nature of the implicit over the entire 3D volume of interest is shown for the Lucy data-set in the accompanying video. In practice the system is extremely robust and produces excellent results without any parameter adjustment; smaller values of C_1 and C_2 in Equation 3 simply lead to the smoothing effect shown in Figure 2. The system also handles missing and noisy data gracefully, as demonstrated in [1]. Higher dimensional implicit surfaces are also possible, an interesting case being a 4D representation (3D + \"time\") of a moving 3D shape, one use for this being the construction of animation sequences from a time series of 3D point cloud data; in this case both spatial and temporal information can help to resolve noise or missing data problems within individual scans. We demonstrate this in the accompanying video, which shows that 4D surfaces yield superior 3D animation results in comparison to a sequence of 3D models. 
Also interesting are interpolations in 4D: in the accompanying video we effectively interpolate between two three dimensional shapes.

5

Summary

We have presented ideas that are both theoretically and practically useful for the computer graphics and machine learning communities, demonstrating them within the framework of implicit surface fitting. Many authors have demonstrated the fast but limited quality results that occur with compactly supported function bases. The present work differs by precisely minimising a well justified regulariser within the span of such a basis, achieving fast and high quality results. We also showed how normal vectors can be incorporated directly into the usual regression based implicit surface fitting framework, giving a generalisation of the representer theorem. We demonstrated the algorithm on 3D problems of up to 14 million data points, and in the accompanying video we showed the advantage of constructing a 4D surface (3D + time) for 3D animation, rather than a sequence of 3D surfaces.

Figure 4: Reconstruction of the Stanford bunny after adding Gaussian noise with standard deviation, from left to right, 0, 0.6, 1.5 and 3.6 percent of the radius of the smallest enclosing sphere; the normal vectors were similarly corrupted, assuming they had length equal to this radius. The parameters C_1 and C_2 were chosen automatically using five-fold cross validation.

References

[1] C. Walder, B. Schölkopf, and O. Chapelle. Implicit surface modelling with a globally regularised basis of compact support. Technical report, Max Planck Institute for Biological Cybernetics, Department of Empirical Inference, Tübingen, Germany, April 2006.
[2] G. Wahba. Spline Models for Observational Data. Series in Applied Mathematics, Vol. 59, SIAM, Philadelphia, 1990.
[3] Greg Turk and James F. O'Brien. Shape transformation using variational implicit functions. In Proceedings of ACM SIGGRAPH 1999, pages 335-342, August 1999.
[4] Bryan S. Morse, Terry S. Yoo, David T. Chen, Penny Rheingans, and K. R. Subramanian. Interpolating implicit surfaces from scattered surface data using compactly supported radial basis functions. In SMI '01: Proc. Intl. Conf. on Shape Modeling & Applications, Washington, 2001. IEEE Computer Society.
[5] J. C. Carr, R. K. Beatson, J. B. Cherrie, T. J. Mitchell, W. R. Fright, B. C. McCallum, and T. R. Evans. Reconstruction and representation of 3D objects with radial basis functions. In ACM SIGGRAPH 2001, pages 67-76. ACM Press, 2001.
[6] Yutaka Ohtake, Alexander Belyaev, Marc Alexa, Greg Turk, and Hans-Peter Seidel. Multi-level partition of unity implicits. ACM Transactions on Graphics, 22(3):463-470, July 2003.
[7] Y. Ohtake, A. Belyaev, and Hans-Peter Seidel. A multi-scale approach to 3D scattered data interpolation with compactly supported basis functions. In Proc. Intl. Conf. on Shape Modeling, Washington, 2003. IEEE Computer Society.
[8] L. Greengard and V. Rokhlin. A fast algorithm for particle simulations. J. Comp. Phys., pages 280-292, 1997.
[9] Bernhard Schölkopf, Ralf Herbrich, and Alex J. Smola. A generalized representer theorem. In COLT '01/EuroCOLT '01: Proceedings of the 14th Annual Conference on Computational Learning Theory, pages 416-426, London, UK, 2001. Springer-Verlag.
[10] G. Kimeldorf and G. Wahba. Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications, 33:82-95, 1971.
[11] J. Duchon. Splines minimizing rotation-invariant semi-norms in Sobolev spaces. In Constructive Theory of Functions of Several Variables, pages 85-100, 1977.
[12] C. Merkwirth, U. Parlitz, and W. Lauterborn. Fast nearest neighbor searching for nonlinear signal processing. Phys. Rev. E, 62(2):2089-2097, 2000.
[13] Christian Walder, Olivier Chapelle, and Bernhard Schölkopf. Implicit surface modelling as an eigenvalue problem. In Proceedings of the 22nd International Conference on Machine Learning, 2005.
[14] M. O. Franz and P. V. Gehler. How to choose the covariance for Gaussian process regression independently of the basis. In Proc. Gaussian Processes in Practice Workshop, 2006.
[15] J. Quiñonero Candela and C. E. Rasmussen. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, 6:1935-1959, December 2005.
", "award": [], "sourceid": 3034, "authors": [{"given_name": "Christian", "family_name": "Walder", "institution": null}, {"given_name": "Olivier", "family_name": "Chapelle", "institution": null}, {"given_name": "Bernhard", "family_name": "Sch\u00f6lkopf", "institution": null}]}