{"title": "A Local Learning Approach for Clustering", "book": "Advances in Neural Information Processing Systems", "page_first": 1529, "page_last": 1536, "abstract": null, "full_text": "A Local Learning Approach for Clustering\n\n Mingrui Wu, Bernhard Scholkopf Max Planck Institute for Biological Cybernetics 72076 Tubingen, Germany {mingrui.wu, bernhard.schoelkopf}@tuebingen.mpg.de\n\nAbstract\nWe present a local learning approach for clustering. The basic idea is that a good clustering result should have the property that the cluster label of each data point can be well predicted based on its neighboring data and their cluster labels, using current supervised learning methods. An optimization problem is formulated such that its solution has the above property. Relaxation and eigen-decomposition are applied to solve this optimization problem. We also briefly investigate the parameter selection issue and provide a simple parameter selection method for the proposed algorithm. Experimental results are provided to validate the effectiveness of the proposed approach.\n\n1 Introduction\nIn the multi-class clustering problem, we are given n data points, x1 , . . . , xn , and a positive integer c. The goal is to partition the given data xi (1 i n) into c clusters, such that different clusters are in some sense \"distinct\" from each other. Here xi X Rd is the input data, X is the input space. Clustering has been widely applied for data analysis tasks. It identifies groups of data, such that data in the same group are similar to each other, while data in different groups are dissimilar. Many clustering algorithms have been proposed, including the traditional k-means algorithm and the currently very popular spectral clustering approach [3, 10]. Recently the spectral clustering approach has attracted increasing attention due to its promising performance and easy implementation. 
In spectral clustering, the eigenvectors of a matrix are used to reveal the cluster structure in the data. In this paper, we propose a clustering method that also has this characteristic, but which is based on the local learning idea: the cluster label of each data point should be well estimated based on its neighboring data and their cluster labels, using current supervised learning methods. An optimization problem is formulated whose solution satisfies this property. Relaxation and eigen-decomposition are applied to solve this problem. As will be seen later, the proposed algorithm is also easy to implement, while it shows better performance than the spectral clustering approach in the experiments.\n\nThe local learning idea has already been successfully applied in supervised learning problems [1]. This motivates us to incorporate it into clustering, an important unsupervised learning problem. Adapting valuable supervised learning ideas to unsupervised learning problems can be fruitful. For example, in [9] the idea of large margin, which has proved effective in supervised learning, is applied to the clustering problem and good results are obtained.\n\nThe remainder of this paper is organized as follows. In section 2, we specify some notation that will be used in later sections. The details of our local learning based clustering algorithm are presented in section 3. Experimental results are then provided in section 4, where we also briefly investigate the parameter selection issue for the proposed algorithm. Finally we conclude the paper in the last section.\n\n2 Notations\nIn the following, \"neighboring points\" or \"neighbors\" of $x_i$ simply refers to the nearest neighbors of $x_i$ according to some distance metric.\n\n$n$: the total number of data points.\n$c$: the number of clusters to be obtained.\n$C_l$: the set of points contained in the l-th cluster, $1 \le l \le c$.\n$\mathcal{N}_i$: the set of neighboring points of $x_i$, $1 \le i \le n$, not including $x_i$ itself.\n$n_i$: $|\mathcal{N}_i|$, i.e. the number of neighboring points of $x_i$.\n$\mathrm{Diag}(M)$: the diagonal matrix with the same size and the same diagonal elements as M, where M is an arbitrary square matrix.\n\n3 Clustering via Local Learning\n3.1 Local Learning in Supervised Learning\nIn supervised learning algorithms, a model is trained with all the labeled training data and is then used to predict the labels of unseen test data. These algorithms can be called global learning algorithms, as the whole training dataset is used for training. In contrast, in local learning algorithms [1], for a given test data point, a model is built only with its neighboring training data, and the label of the given test point is then predicted by this locally learned model. It has been reported that local learning algorithms often outperform global ones [1], as the local models are trained only with the points that are related to the particular test data. And in [8], it is proposed that locality is a crucial parameter which can be used for capacity control, in addition to other capacity measures such as the VC dimension.\n\n3.2 Representation of Clustering Results\nThe procedure of our clustering approach largely follows that of the clustering algorithms proposed in [2, 10]. We also use a Partition Matrix (PM) $P = [p_{il}] \in \{0, 1\}^{n \times c}$ to represent a clustering scheme: $p_{il} = 1$ if $x_i$ ($1 \le i \le n$) is assigned to cluster $C_l$ ($1 \le l \le c$), otherwise $p_{il} = 0$. So in each row of P there is one and only one element that equals 1; all the others equal 0. As in [2, 10], instead of computing the PM directly to cluster the given data, we compute a Scaled Partition Matrix (SPM) F defined by $F = P(P^\top P)^{-1/2}$. (The reason for this will be given later.) As $P^\top P$ is diagonal, the l-th ($1 \le l \le c$) column of F is just the l-th column of P multiplied by $1/\sqrt{|C_l|}$. Clearly we have\n\n$$F^\top F = (P^\top P)^{-1/2} P^\top P (P^\top P)^{-1/2} = I \quad (1)$$\n\nwhere I is the unit matrix. 
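As a small illustration (a NumPy sketch with a toy cluster assignment of our own, not from the paper), the SPM satisfies property (1), and the mapping (2) below exactly restores the PM:

```python
import numpy as np

# Hypothetical toy assignment: 5 points, 3 clusters (the labels are ours).
labels = [0, 0, 1, 2, 1]
n, c = len(labels), 3

P = np.zeros((n, c))
P[np.arange(n), labels] = 1.0        # partition matrix: exactly one 1 per row
F = P / np.sqrt(P.sum(axis=0))       # F = P (P^T P)^{-1/2}: column l scaled by 1/sqrt(|C_l|)

assert np.allclose(F.T @ F, np.eye(c))            # property (1)
P_back = np.diag(np.diag(F @ F.T) ** -0.5) @ F    # mapping (2): Diag(FF^T)^{-1/2} F
assert np.allclose(P_back, P)                     # (2) restores the PM
```

Since row i of F has a single nonzero entry $1/\sqrt{|C_l|}$, the diagonal of $FF^\top$ holds $1/|C_l|$, and multiplying by its inverse square root rescales each row back to a 0/1 indicator.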
Given a SPM F, we can easily restore the corresponding PM P with a mapping $\mathcal{P}(\cdot)$ defined as\n\n$$P = \mathcal{P}(F) = \mathrm{Diag}(FF^\top)^{-1/2} F \quad (2)$$\n\nIn the following, we will also express F as $F = [\mathbf{f}^1, \dots, \mathbf{f}^c] \in \mathbb{R}^{n \times c}$, where $\mathbf{f}^l = [f_1^l, \dots, f_n^l]^\top \in \mathbb{R}^n$, $1 \le l \le c$, is the l-th column of F.\n\n3.3 Basic Idea\nThe good performance of local learning methods indicates that the label of a data point can be well estimated based on its neighbors. Based on this, in order to find a good SPM F (or equivalently a good clustering result), we propose to solve the following optimization problem:\n\n$$\min_{F \in \mathbb{R}^{n \times c}} \sum_{l=1}^{c} \sum_{i=1}^{n} \left(f_i^l - o_i^l(x_i)\right)^2 = \sum_{l=1}^{c} \|\mathbf{f}^l - \mathbf{o}^l\|^2 \quad (3)$$\n$$\text{subject to: } F \text{ is a scaled partition matrix} \quad (4)$$\n\nwhere $o_i^l(\cdot)$ denotes the output function of a Kernel Machine (KM), trained with some supervised kernel learning algorithm [5], using the training data $\{(x_j, f_j^l)\}_{x_j \in \mathcal{N}_i}$, where $f_j^l$ is used as the label of $x_j$ for training this KM. In (3), $\mathbf{o}^l = [o_1^l(x_1), \dots, o_n^l(x_n)]^\top \in \mathbb{R}^n$. Details on how to compute $o_i^l(x_i)$ will be given later. For the function $o_i^l(\cdot)$, the superscript l indicates that it is for the l-th cluster, and the subscript i means the KM is trained with the neighbors of $x_i$. Hence apart from $x_i$, the training data $\{(x_j, f_j^l)\}_{x_j \in \mathcal{N}_i}$ also influence the value of $o_i^l(x_i)$. Note that the $f_j^l$ ($x_j \in \mathcal{N}_i$) are also variables of problem (3)-(4).\n\nTo explain the idea behind problem (3)-(4), let us consider the following problem:\n\nProblem 1. For a data point $x_i$ and a cluster $C_l$, given the values of $f_j^l$ at $x_j \in \mathcal{N}_i$, what should be the proper value of $f_i^l$ at $x_i$?\n\nThis problem can be solved by supervised learning. In particular, we can build a KM with the training data $\{(x_j, f_j^l)\}_{x_j \in \mathcal{N}_i}$. 
As mentioned before, let $o_i^l(\cdot)$ denote the output function of this locally learned KM; then the good performance of local learning methods mentioned above implies that $o_i^l(x_i)$ is probably a good guess of $f_i^l$, i.e. the proper $f_i^l$ should be close to $o_i^l(x_i)$.\n\nTherefore, a good SPM F should have the following property: for any $x_i$ ($1 \le i \le n$) and any cluster $C_l$ ($1 \le l \le c$), the value of $f_i^l$ can be well estimated based on the neighbors of $x_i$. That is, $f_i^l$ should be similar to the output of the KM that is trained locally with the data $\{(x_j, f_j^l)\}_{x_j \in \mathcal{N}_i}$. This suggests that in order to find a good SPM F, we can solve the optimization problem (3)-(4).\n\nWe can also explain our approach intuitively as follows. A good clustering method will put the data into well separated clusters. This implies that it is easy to predict the cluster membership of a point based on its neighbors. If, on the other hand, a cluster is split in the middle, then there will be points at the boundary for which it is hard to predict which cluster they belong to. So minimizing the objective function (3) favors clustering schemes that do not split the same group of data into different clusters.\n\nMoreover, it is very difficult to construct local clustering algorithms in the same way as for supervised learning. In [1], a local learning algorithm is obtained by running a standard supervised algorithm on a local training set. This does not transfer to clustering. Rather than simply applying a given clustering algorithm locally and facing the difficulty of combining the local solutions into a global one, problem (3)-(4) seeks a global solution with the property that locally, for each point, its cluster assignment looks like the solution that we would obtain by local learning if we knew the cluster assignments of its neighbors.\n\n3.4 Computing $o_i^l(x_i)$\nHaving explained the basic idea, we now have to make problem (3)-(4) more specific in order to build a concrete clustering algorithm. 
So we consider, based on $x_i$ and $\{(x_j, f_j^l)\}_{x_j \in \mathcal{N}_i}$, how to compute $o_i^l(x_i)$ with kernel learning algorithms. It is well known that applying many kernel learning algorithms to $\{(x_j, f_j^l)\}_{x_j \in \mathcal{N}_i}$ will result in a KM whose output $o_i^l(x_i)$ can be calculated as\n\n$$o_i^l(x_i) = \sum_{j \in \mathcal{N}_i} \alpha_{ij}^l K(x_i, x_j) \quad (5)$$\n\nwhere $K : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ is a positive definite kernel function [5], and the $\alpha_{ij}^l$ are the expansion coefficients. In general, any kernel learning algorithm can be applied to compute the coefficients $\alpha_{ij}^l$. Here we choose one that makes problem (3)-(4) easy to solve. To this end, we adopt the Kernel Ridge Regression (KRR) algorithm [6], with which we can obtain an analytic expression of $o_i^l(x_i)$ based on $\{(x_j, f_j^l)\}_{x_j \in \mathcal{N}_i}$. Thus for each $x_i$, we need to solve the following KRR training problem:\n\n$$\min_{\boldsymbol{\alpha}_i^l \in \mathbb{R}^{n_i}} \|K_i \boldsymbol{\alpha}_i^l - \mathbf{f}_i^l\|^2 + \lambda (\boldsymbol{\alpha}_i^l)^\top K_i \boldsymbol{\alpha}_i^l \quad (6)$$\n\nwhere $\boldsymbol{\alpha}_i^l \in \mathbb{R}^{n_i}$ is the vector of expansion coefficients, i.e. $\boldsymbol{\alpha}_i^l = [\alpha_{ij}^l]$ for $x_j \in \mathcal{N}_i$, $\lambda > 0$ is the regularization parameter, $\mathbf{f}_i^l \in \mathbb{R}^{n_i}$ denotes the vector $[f_j^l]$ for $x_j \in \mathcal{N}_i$, and $K_i \in \mathbb{R}^{n_i \times n_i}$ is the kernel matrix over $x_j \in \mathcal{N}_i$, namely $K_i = [K(x_u, x_v)]$ for $x_u, x_v \in \mathcal{N}_i$.\n\nSolving problem (6) leads to $\boldsymbol{\alpha}_i^l = (K_i + \lambda I)^{-1} \mathbf{f}_i^l$. Substituting it into (5), we have\n\n$$o_i^l(x_i) = \mathbf{k}_i^\top (K_i + \lambda I)^{-1} \mathbf{f}_i^l \quad (7)$$\n\nwhere $\mathbf{k}_i \in \mathbb{R}^{n_i}$ denotes the vector $[K(x_i, x_j)]$ for $x_j \in \mathcal{N}_i$. Equation (7) can be written as a linear equation:\n\n$$o_i^l(x_i) = \boldsymbol{\alpha}_i^\top \mathbf{f}_i^l \quad (8)$$\n\nwhere $\boldsymbol{\alpha}_i \in \mathbb{R}^{n_i}$ is computed as\n\n$$\boldsymbol{\alpha}_i^\top = \mathbf{k}_i^\top (K_i + \lambda I)^{-1} \quad (9)$$\n\nIt can be seen that $\boldsymbol{\alpha}_i$ is independent of $\mathbf{f}_i^l$ and the cluster index l, while it is different for different $x_i$. Note that $\mathbf{f}_i^l$ is a sub-vector of $\mathbf{f}^l$, so equation (8) can be written in a compact form as\n\n$$\mathbf{o}^l = A \mathbf{f}^l \quad (10)$$\n\nwhere $\mathbf{o}^l$ and $\mathbf{f}^l$ are the same as in (3), while the matrix $A = [a_{ij}] \in \mathbb{R}^{n \times n}$ is constructed as follows: for $x_i$ and $x_j$, $1 \le i, j \le n$, if $x_j \in \mathcal{N}_i$, then $a_{ij}$ equals the corresponding element of $\boldsymbol{\alpha}_i$ in (9); otherwise $a_{ij}$ equals 0. Like $\boldsymbol{\alpha}_i$, the matrix A is also independent of $\mathbf{f}^l$ and the cluster index l. 
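The construction of A from equations (7)-(10) can be sketched in a few lines of NumPy. This is our own illustration, not the paper's code: the Gaussian kernel with a fixed width, the function name, and the toy defaults are all assumptions made for the example.

```python
import numpy as np

def build_A(X, k=3, lam=1.0):
    """Sketch of eqs. (5)-(10): row i of A holds alpha_i^T = k_i^T (K_i + lam I)^{-1}
    on the k nearest neighbours of x_i, and zeros elsewhere."""
    n = len(X)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-sq)                                      # a Gaussian kernel (illustrative width)
    A = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(sq[i])[1:k + 1]                # N_i: k nearest, excluding x_i itself
        Ki = K[np.ix_(nbrs, nbrs)]                       # local kernel matrix over N_i
        ki = K[i, nbrs]                                  # the vector k_i
        # eq. (9): since K_i + lam I is symmetric, k_i^T M^{-1} = (M^{-1} k_i)^T
        A[i, nbrs] = np.linalg.solve(Ki + lam * np.eye(k), ki)
    return A
```

`T = (I - A).T @ (I - A)` then gives the matrix of equation (13); using a linear solve instead of an explicit inverse is the standard numerically preferable choice.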
Substituting (10) into (3) results in a more specific optimization problem:\n\n$$\min_{F \in \mathbb{R}^{n \times c}} \sum_{l=1}^{c} \|\mathbf{f}^l - A\mathbf{f}^l\|^2 = \sum_{l=1}^{c} (\mathbf{f}^l)^\top T \mathbf{f}^l = \mathrm{trace}(F^\top T F) \quad (11)$$\n$$\text{subject to: } F \text{ is a scaled partition matrix} \quad (12)$$\n\nwhere\n\n$$T = (I - A)^\top (I - A) \quad (13)$$\n\nThus, based on the KRR algorithm, we have transformed the objective function (3) into the quadratic function (11).\n\n3.5 Relaxation\nFollowing the method in [2, 10], we relax F into the continuous domain and combine the property (1) into problem (11)-(12), so as to turn it into a tractable continuous optimization problem:\n\n$$\min_{F \in \mathbb{R}^{n \times c}} \mathrm{trace}(F^\top T F) \quad (14)$$\n$$\text{subject to: } F^\top F = I \quad (15)$$\n\nLet $F^* \in \mathbb{R}^{n \times c}$ denote the matrix whose columns consist of the c eigenvectors corresponding to the c smallest eigenvalues of the symmetric matrix T. Then it is known that the global optimum of the above problem is not unique, but is the subspace spanned by the columns of $F^*$ through orthonormal matrices [10]:\n\n$$\{F^* R : R \in \mathbb{R}^{c \times c}, \; R^\top R = I\} \quad (16)$$\n\nNow we can see that working on the SPM F allows us to make use of the property (1) to construct a tractable continuous optimization problem (14)-(15), while working directly on the PM P does not have this advantage.\n\n3.6 Discretization: Obtaining the Final Clustering Result\nAccording to [10], to get the final clustering result, we need to find a true SPM F which is close to the subspace (16). To this end, we apply the mapping (2) to $F^*$ to obtain a matrix $P^* = \mathcal{P}(F^*)$. It can easily be proved that for any orthogonal matrix $R \in \mathbb{R}^{c \times c}$, we have $\mathcal{P}(F^* R) = P^* R$. This equation implies that if there exists an orthogonal matrix R such that $F^* R$ is close to a true SPM F, then $P^* R$ should also be near the corresponding discrete PM P. 
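The relaxed solution (14)-(16) and the mapping (2) applied to it can be sketched as follows (a NumPy illustration under our own naming; `np.linalg.eigh` returns eigenvalues in ascending order, so the first c eigenvectors are exactly the ones needed):

```python
import numpy as np

def relax_and_map(A, c):
    """Sketch of (14)-(16) plus mapping (2): F* collects the c eigenvectors of
    T = (I - A)^T (I - A) with smallest eigenvalues; P* = Diag(F* F*^T)^{-1/2} F*."""
    n = A.shape[0]
    M = np.eye(n) - A
    T = M.T @ M                              # eq. (13); symmetric by construction
    eigvals, eigvecs = np.linalg.eigh(T)     # ascending eigenvalues for symmetric T
    F_star = eigvecs[:, :c]                  # continuous optimum of (14)-(15)
    P_star = np.diag(np.sum(F_star ** 2, axis=1) ** -0.5) @ F_star  # mapping (2)
    return F_star, P_star
```

Any $F^* R$ with $R^\top R = I$ is an equally good optimum, per (16), which is exactly why the discretization step of section 3.6 is still needed afterwards.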
To find such an orthogonal matrix R and the discrete PM P, we can solve the following optimization problem [10]:\n\n$$\min_{P \in \mathbb{R}^{n \times c}, \, R \in \mathbb{R}^{c \times c}} \|P - P^* R\|^2 \quad (17)$$\n$$\text{subject to: } P \in \{0, 1\}^{n \times c}, \; P \mathbf{1}_c = \mathbf{1}_n \quad (18)$$\n$$R^\top R = I \quad (19)$$\n\nwhere $\mathbf{1}_c$ and $\mathbf{1}_n$ denote the c-dimensional and the n-dimensional vectors of all 1's respectively. Details on how to find a local minimum of the above problem can be found in [10]. In [3], a method using the k-means algorithm is proposed to find a discrete PM P based on $P^*$. In this paper, we adopt the approach in [10] to get the final clustering result.\n\n3.7 Comparison with Spectral Clustering\nOur Local Learning based Clustering Algorithm (LLCA) also uses the eigenvectors of a matrix (T in (13)) to reveal the cluster structure in the data; therefore it can be regarded as belonging to the category of spectral clustering approaches. The matrix whose eigenvectors are used for clustering plays the key role in spectral clustering. In LLCA, this matrix is computed based on the local learning idea: a clustering result is evaluated based on whether the label of each point can be well estimated based on its neighbors with a well established supervised learning algorithm. This is different from the graph partitioning based spectral clustering methods. As will be seen later, LLCA and spectral clustering have quite different performance in the experiments.\n\nLLCA needs one additional step: computing the matrix T in the objective function (14). The remaining steps, i.e. computing the eigenvectors of T and discretization (cf. section 3.6), are the same as in the spectral clustering approach. According to equation (13), to compute T, we need to compute the matrix A in (10), which in turn requires calculating $\boldsymbol{\alpha}_i$ in (9) for each $x_i$. We can see that this is very easy to implement, and A can be computed with time complexity $O(\sum_{i=1}^{n} n_i^3)$. 
In practice, just like in the spectral clustering method, the number of neighbors $n_i$ is usually set to a fixed small value k for all $x_i$ in LLCA. In this case, A can be computed efficiently with complexity $O(nk^3)$, which scales linearly with the number of data n. So in this case the main calculation is to obtain the eigenvectors of T. Furthermore, according to (13), the eigenvectors of T are identical to the right singular vectors of $I - A$, which can be calculated efficiently because now $I - A$ is sparse: each row of it contains just $k + 1$ nonzero elements. Hence in this case we do not need to compute T explicitly. We conclude that LLCA is easy to implement, and in practice the main computational load is to compute the eigenvectors of T; therefore LLCA and the spectral clustering approach have the same order of time complexity in most practical cases.¹\n\n4 Experimental Results\nIn this section, we empirically compare LLCA with the spectral clustering approach of [10] as well as with k-means clustering. For the last discretization step of LLCA (cf. section 3.6), we use the same code contained in the implementation of the spectral clustering algorithm, available at http://www.cis.upenn.edu/jshi/software/.\n\n4.1 Datasets\nThe following datasets are used in the experiments.\n\nUSPS-3568: The examples of handwritten digits 3, 5, 6 and 8 from the USPS dataset.\nUSPS-49: The examples of handwritten digits 4 and 9 from the USPS dataset.\nUMist: This dataset consists of face images of 20 different persons.\nUMist5: The data from the UMist dataset belonging to classes 4, 8, 12, 16 and 20.\n\n¹Sometimes we are also interested in a special case: $n_i = n - 1$ for all $x_i$, i.e. all the data are neighbors of each other. In this case, it can be proved that $T = Q^\top Q$, where $Q = (\mathrm{Diag}(B))^{-1} B$ with $B = I - K(K + \lambda I)^{-1}$, and K is the kernel matrix over all the data points. So in this case T can be computed with time complexity $O(n^3)$. 
This is the same as computing the eigenvectors of the non-sparse matrix T. Hence the order of the overall time complexity is not increased by the step of computing T, and the above statements still hold.\n\nNews4a: The text documents from the 20-newsgroup dataset covering the topics in rec., which contains autos, motorcycles, baseball and hockey.\nNews4b: The text documents from the 20-newsgroup dataset covering the topics in sci., which contains crypt, electronics, med and space.\n\nFurther details of these datasets are provided in Table 1.\n\nTable 1: Descriptions of the datasets used in the experiments. For each dataset, the number of data n, the data dimensionality d and the number of classes c are provided.\n\nDataset     n     d      c\nUSPS-3568   3082  256    4\nUSPS-49     1673  256    2\nUMist       575   10304  20\nUMist5      140   10304  5\nNews4a      3840  4989   4\nNews4b      3874  5652   4\n\nIn News4a and News4b, each document is represented by a feature vector, the elements of which are related to the frequency of occurrence of different words. For these two datasets, we extract a subset of each of them in the experiments by ignoring the words that occur in 10 or fewer documents and then removing the documents that have 10 or fewer words. This is why the data dimensionalities of these two datasets differ, although both are from the 20-newsgroup dataset.\n\n4.2 Performance Measure\nIn the experiments, we set the number of clusters equal to the number of classes c for all the clustering algorithms. To evaluate their performance, we compare the clusters generated by these algorithms with the true classes by computing the following two performance measures.\n\n4.2.1 Normalized Mutual Information\nThe Normalized Mutual Information (NMI) [7] is widely used for determining the quality of clusters. 
For two random variables X and Y, the NMI is defined as [7]\n\n$$\mathrm{NMI}(X, Y) = \frac{I(X, Y)}{\sqrt{H(X) H(Y)}} \quad (20)$$\n\nwhere $I(X, Y)$ is the mutual information between X and Y, while $H(X)$ and $H(Y)$ are the entropies of X and Y respectively. One can see that $\mathrm{NMI}(X, X) = 1$, which is the maximal possible value of NMI. Given a clustering result, the NMI in (20) is estimated as [7]\n\n$$\widehat{\mathrm{NMI}} = \frac{\sum_{l=1}^{c} \sum_{h=1}^{c} n_{l,h} \log\left(\frac{n \cdot n_{l,h}}{n_l \hat{n}_h}\right)}{\sqrt{\left(\sum_{l=1}^{c} n_l \log \frac{n_l}{n}\right)\left(\sum_{h=1}^{c} \hat{n}_h \log \frac{\hat{n}_h}{n}\right)}} \quad (21)$$\n\nwhere $n_l$ denotes the number of data contained in the cluster $C_l$ ($1 \le l \le c$), $\hat{n}_h$ is the number of data belonging to the h-th class ($1 \le h \le c$), and $n_{l,h}$ denotes the number of data in the intersection between the cluster $C_l$ and the h-th class. The value calculated in (21) is used as a performance measure for the given clustering result: the larger this value, the better the performance.\n\n4.2.2 Clustering Error\nAnother performance measure is the Clustering Error. To compute it for a clustering result, we need to build a permutation mapping function $map(\cdot)$ that maps each cluster index to a true class label. The classification error based on $map(\cdot)$ can then be computed as\n\n$$err = 1 - \frac{1}{n} \sum_{i=1}^{n} \delta(y_i, map(c_i))$$\n\nwhere $y_i$ and $c_i$ are the true class label and the obtained cluster index of $x_i$ respectively, and $\delta(x, y)$ is the delta function that equals 1 if $x = y$ and 0 otherwise. The clustering error is defined as the minimal classification error among all possible permutation mappings. This optimal matching can be found with the Hungarian algorithm [4], which is devised for obtaining the maximal weighted matching of a bipartite graph.\n\n4.3 Parameter Selection\nIn the spectral clustering algorithm, first a graph of n nodes is constructed, each node of which corresponds to a data point; then the clustering problem is converted into a graph partition problem. 
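For concreteness, the two performance measures of section 4.2 can be computed as in the following pure-Python sketch (the function names are ours; for small c we use brute-force permutation search where the paper uses the Hungarian algorithm):

```python
import math
from collections import Counter
from itertools import permutations

def nmi(y_true, y_cluster):
    """Estimated NMI, eq. (21)."""
    n = len(y_true)
    nl, nh = Counter(y_cluster), Counter(y_true)
    nlh = Counter(zip(y_cluster, y_true))
    num = sum(v * math.log(n * v / (nl[l] * nh[h])) for (l, h), v in nlh.items())
    den = math.sqrt(sum(v * math.log(v / n) for v in nl.values())
                    * sum(v * math.log(v / n) for v in nh.values()))
    return num / den

def clustering_error(y_true, y_cluster):
    """Minimal error over all permutation mappings; brute force is fine for small c."""
    classes = sorted(set(y_true))
    clusters = sorted(set(y_cluster))
    best = 0
    for p in permutations(classes):
        m = dict(zip(clusters, p))                   # one candidate map() function
        best = max(best, sum(m[c] == y for c, y in zip(y_cluster, y_true)))
    return 1 - best / len(y_true)
```

Both denominator sums in `nmi` are negative, so their product is positive and the square root is well defined whenever there is more than one cluster and one class.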
In the experiments, for the spectral clustering algorithm, a weighted k-nearest neighbor graph is employed, where k is a parameter searched over the grid $k \in \{5, 10, 20, 40, 80\}$. On this graph, the edge weight between two connected data points is computed with a kernel function, for which the following two kernel functions are tried respectively in the experiments. The cosine kernel:\n\n$$K_1(x_i, x_j) = \frac{\langle x_i, x_j \rangle}{\|x_i\| \|x_j\|} \quad (22)$$\n\nand the Gaussian kernel:\n\n$$K_2(x_i, x_j) = \exp\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right) \quad (23)$$\n\nThe parameter $\sigma^2$ in (23) is searched in $\{\sigma_0^2/16, \sigma_0^2/8, \sigma_0^2/4, \sigma_0^2/2, \sigma_0^2, 2\sigma_0^2, 4\sigma_0^2, 8\sigma_0^2, 16\sigma_0^2\}$, where $\sigma_0$ is the mean norm of the given data $x_i$, $1 \le i \le n$.\n\nFor LLCA, the cosine function (22) and the Gaussian function (23) are also adopted respectively as the kernel function in (5). The number of neighbors $n_i$ for all $x_i$ is set to a single value k. The parameters k and $\sigma$ are searched over the same grids as mentioned above. In LLCA, there is another parameter $\lambda$ (cf. (6)), which is selected from the grid $\lambda \in \{0.1, 1, 1.5\}$.\n\nAutomatic parameter selection for unsupervised learning is still a difficult problem. We propose a simple parameter selection method for LLCA as follows. For a clustering result obtained with a set of parameters, which in our case consists of k and $\lambda$ when the cosine kernel (22) is used, or k, $\sigma$ and $\lambda$ when the Gaussian kernel (23) is used, we compute its corresponding SPM F and then use the objective value (11) as the evaluation criterion. Namely, the clustering result corresponding to the smallest objective value is finally selected for LLCA.\n\nFor simplicity, on each dataset we will just report the best result of spectral clustering. For LLCA, both the best result (LLCA1) and the one obtained with the above parameter selection method (LLCA2) will be provided. No parameter selection is needed for the k-means algorithm, since the number of clusters is given.\n\n4.4 Numerical Results\nNumerical results are summarized in Table 2. 
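The parameter selection rule of section 4.3 amounts to computing the objective value (11) for each candidate clustering result (with that candidate's own matrix T) and keeping the smallest. A NumPy sketch under our own naming, with toy inputs that are ours, not the paper's:

```python
import numpy as np

def llca_objective(T, labels, c):
    """Objective (11), trace(F^T T F), for the SPM of a hard clustering result."""
    n = len(labels)
    P = np.zeros((n, c))
    P[np.arange(n), labels] = 1.0
    F = P / np.sqrt(P.sum(axis=0))       # SPM of this result (assumes no empty cluster)
    return float(np.trace(F.T @ T @ F))

def select_result(candidates, c):
    """Among (T, labels) pairs from different parameter settings,
    keep the result with the smallest objective value."""
    return min(candidates, key=lambda tc: llca_objective(tc[0], tc[1], c))
```

Because each parameter setting yields its own T and its own clustering, the candidates are compared as (T, labels) pairs rather than by a single shared matrix.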
The results on the News4a and News4b datasets show that different kernels may lead to dramatically different performance for both spectral clustering and LLCA. For spectral clustering, the results on USPS-3568 are also significantly different for different kernels. It can also be observed that different performance measures may result in different performance ranks of the clustering algorithms being investigated. This is reflected by the results on USPS-3568 when the cosine kernel is used and the results on News4b when the Gaussian kernel is used. Despite all these phenomena, we can still see from Table 2 that both LLCA1 and LLCA2 outperform the spectral clustering and the k-means algorithms in most cases. We can also see that LLCA2 fails to find good parameters on News4a and News4b when the Gaussian kernel is used, while in the remaining cases LLCA2 is either slightly worse than or identical to LLCA1. And analogous to LLCA1, LLCA2 also improves on the results of the spectral clustering and the k-means algorithms on most datasets. This illustrates that our parameter selection method for LLCA can work well in many cases, although it clearly still needs improvement. Finally, it can be seen that the k-means algorithm is worse than spectral clustering, except on USPS-3568 with respect to the clustering error criterion when the cosine kernel is used for spectral clustering. This corroborates the advantage of the popular spectral clustering approach over the traditional k-means algorithm.\n\nTable 2: Clustering results. Both the normalized mutual information and the clustering error are provided. Two kernel functions (22) and (23) are tried for both spectral clustering and LLCA. On each dataset, the best result of the spectral clustering algorithm is reported (Spec-Clst). For LLCA, both the best result (LLCA1) and the one obtained with the parameter selection method described before (LLCA2) are provided. Note that the results of the k-means algorithm are independent of the kernel function.\n\nNMI, cosine:\nMethod      USPS-3568  USPS-49  UMist   UMist5  News4a  News4b\nSpec-Clst   0.6575     0.3608   0.7483  0.8810  0.6468  0.5765\nLLCA1       0.8720     0.6241   0.8003  1       0.7587  0.7125\nLLCA2       0.8720     0.6241   0.7889  1       0.7587  0.7125\nk-means     0.5202     0.2352   0.6479  0.7193  0.0800  0.0380\n\nNMI, Gaussian:\nSpec-Clst   0.8245     0.4319   0.8099  0.8773  0.4039  0.1861\nLLCA1       0.8493     0.5980   0.8377  1       0.2642  0.1776\nLLCA2       0.8467     0.5493   0.8377  1       0.0296  0.0322\nk-means     0.5202     0.2352   0.6479  0.7193  0.0800  0.0380\n\nError (%), cosine:\nSpec-Clst   32.93      16.56    46.26   9.29    28.26   21.73\nLLCA1       3.57       8.01     36.00   0       7.99    9.65\nLLCA2       3.57       8.01     38.43   0       7.99    9.65\nk-means     22.16      22.30    56.35   36.43   70.62   74.08\n\nError (%), Gaussian:\nSpec-Clst   5.68       13.51    41.74   10.00   42.34   64.71\nLLCA1       4.61       8.43     33.91   0       47.24   53.25\nLLCA2       4.70       9.80     37.22   0       74.38   72.97\nk-means     22.16      22.30    56.35   36.43   70.62   74.08\n\n5 Conclusion\nWe have proposed a local learning approach for clustering, where an optimization problem is formulated leading to a solution with the property that the label of each data point can be well estimated based on its neighbors. We have also provided a parameter selection method for the proposed clustering algorithm. Experiments show encouraging results. Future work may include improving the proposed parameter selection method and extending this work to other applications such as image segmentation.\n\nReferences\n[1] L. Bottou and V. Vapnik. Local learning algorithms. Neural Computation, 4:888-900, 1992.\n[2] P. K. Chan, M. D. F. Schlag, and J. Y. Zien. Spectral k-way ratio-cut partitioning and clustering. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 13:1088-1096, 1994.\n[3] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: analysis and an algorithm. In T. G. Dietterich, S. Becker, and Z.
Ghahramani, editors, Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press.\n[4] C. H. Papadimitriou and K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Dover, New York, 1998.\n[5] B. Schölkopf and A. J. Smola. Learning with Kernels. The MIT Press, Cambridge, MA, 2002.\n[6] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, Cambridge, UK, 2004.\n[7] A. Strehl and J. Ghosh. Cluster ensembles - a knowledge reuse framework for combining multiple partitions. Journal of Machine Learning Research, 3:583-617, 2002.\n[8] V. Vapnik. The Nature of Statistical Learning Theory. Springer Verlag, New York, 1995.\n[9] L. Xu, J. Neufeld, B. Larson, and D. Schuurmans. Maximum margin clustering. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems 17. MIT Press, Cambridge, MA, 2005.\n[10] S. X. Yu and J. Shi. Multiclass spectral clustering. In L. D. Raedt and S. Wrobel, editors, International Conference on Computer Vision. ACM, 2003.\n", "award": [], "sourceid": 3115, "authors": [{"given_name": "Mingrui", "family_name": "Wu", "institution": null}, {"given_name": "Bernhard", "family_name": "Sch\u00f6lkopf", "institution": null}]}