{"title": "Fast Embedding of Sparse Similarity Graphs", "book": "Advances in Neural Information Processing Systems", "page_first": 571, "page_last": 578, "abstract": "", "full_text": "Fast Embedding of Sparse Music Similarity Graphs\n\nJohn C. Platt\nMicrosoft Research\n1 Microsoft Way\nRedmond, WA 98052 USA\njplatt@microsoft.com\n\nAbstract\n\nThis paper applies fast sparse multidimensional scaling (MDS) to a large graph of music similarity, with 267K vertices that represent artists, albums, and tracks, and 3.22M edges that represent similarity between those entities. Once vertices are assigned locations in a Euclidean space, the locations can be used to browse music and to generate playlists. MDS on very large sparse graphs can be performed effectively by a family of algorithms called Rectangular Dijkstra (RD) MDS algorithms. These RD algorithms operate on a dense rectangular slice of the distance matrix, created by calling Dijkstra a constant number of times. Two RD algorithms are compared: Landmark MDS, which uses the Nyström approximation to perform MDS; and a new algorithm called Fast Sparse Embedding, which uses FastMap. These algorithms compare favorably to Laplacian Eigenmaps, both in terms of speed and embedding quality.\n\n1 Introduction\n\nThis paper examines a general problem: given a sparse graph of similarities between a set of objects, quickly assign each object a location in a low-dimensional Euclidean space. This general problem can arise in several different applications; this paper addresses a specific application to music similarity.\nIn the case of music similarity, a set of musical entities (i.e., artists, albums, tracks) must be placed in a low-dimensional space. Human editors have already supplied a graph of similarities, e.g., artist A is similar to artist B. There are three good reasons to embed a musical similarity graph:\n\n1. 
Visualization – If a user's musical collection is placed in two dimensions, it can be easily visualized on a display. This visualization can aid musical browsing.\n\n2. Interpolation – Given a graph of similarities, it is simple to find music that “sounds like” other music. However, once music is embedded in a low-dimensional space, new user interfaces are enabled. For example, a user can specify a playlist by starting at song A and ending at song B, with the songs in the playlist smoothly interpolating between A and B.\n\n3. Compression – In order to estimate “sounds like” directly from a graph of music similarities, the user must have access to the graph of all known music. However, once all of the musical entities are embedded, the coordinates for the music in a user's collection can be shipped down to the user's computer. These coordinates are much smaller than the entire graph.\n\nIt is important to have algorithms that exploit the sparseness of similarity graphs, because large-scale databases of similarities are very often sparse. Human editors cannot create a dense N × N matrix of music similarity for large values of N. The best editors can do is identify similar artists, albums, and tracks. Furthermore, humans are poor at accurately estimating large distances between entities (e.g., which is farther away from The Beatles: Enya or Duke Ellington?).\nHence, there is a definite need for a scalable embedding algorithm that can handle a sparse graph of similarities, generalizing to similarities not seen in the training set.\n\n1.1 Structure of Paper\n\nThe paper describes three existing approaches to the sparse embedding problem in section 2, and section 3 describes a new algorithm for solving the problem. 
Section 4.1 verifies that the new algorithm does not get stuck in local minima, and section 4.2 goes into further detail on the application of embedding musical similarity into a low-dimensional Euclidean space.\n\n2 Methods for Sparse Embedding\n\nMultidimensional scaling (MDS) [4] is an established branch of statistics that deals with embedding objects in a low-dimensional Euclidean space based on a matrix of similarities. More specifically, MDS algorithms take a matrix of dissimilarities δ_rs and find vectors x_r whose inter-vector distances d_rs are well matched to δ_rs. A common, flexible algorithm is called ALSCAL [13], which encourages the inter-vector distances to be near some ideal values:\n\nmin over x_r of Σ_rs (d_rs² − d̂_rs²)²,    (1)\n\nwhere the ideal values d̂_rs are derived from the dissimilarities δ_rs, typically through a linear relationship.\nThere are three existing approaches for applying MDS to large sparse dissimilarity matrices:\n\n1. Apply an MDS algorithm to the sparse graph directly.\nNot all MDS algorithms require a dense matrix of dissimilarities. For example, ALSCAL can operate on a sparse matrix by ignoring missing terms in its cost function (1). However, as shown in section 4.1, ALSCAL cannot reconstruct the positions of known data points given a sparse matrix of dissimilarities.\n\n2. Use a graph algorithm to generate a full matrix of dissimilarities.\nThe Isomap algorithm [14] finds an embedding of a sparse set of dissimilarities into a low-dimensional Euclidean space. Isomap first applies Floyd's shortest-path algorithm [9] to find the shortest distance between any two points in the graph, and then uses these N × N distances as input to a full MDS algorithm. Once in the low-dimensional space, data can easily be interpolated or extrapolated. 
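The sparse version of the ALSCAL-style objective (1), summed only over the observed pairs, can be minimized numerically. Below is a minimal gradient-descent sketch, not ALSCAL proper (which uses alternating least squares with optimal scaling); the function names and the four-point toy data are invented for illustration:

```python
import random

def sparse_stress(X, delta2):
    """Cost (1) restricted to *known* pairs: sum over observed (r, s) of
    (d_rs^2 - ideal_rs^2)^2, where missing entries are simply skipped."""
    total = 0.0
    for (r, s), t in delta2.items():
        d2 = sum((a - b) ** 2 for a, b in zip(X[r], X[s]))
        total += (d2 - t) ** 2
    return total

def gradient_step(X, delta2, lr=0.001):
    """One gradient-descent step on the sparse cost above."""
    grad = [[0.0] * len(x) for x in X]
    for (r, s), t in delta2.items():
        d2 = sum((a - b) ** 2 for a, b in zip(X[r], X[s]))
        c = 4.0 * (d2 - t)  # derivative of (d2 - t)^2 through d2
        for k in range(len(X[r])):
            g = c * (X[r][k] - X[s][k])
            grad[r][k] += g
            grad[s][k] -= g
    return [[x - lr * g for x, g in zip(row, grow)]
            for row, grow in zip(X, grad)]

random.seed(0)
# Four points of a unit square; only 4 of the 6 pairwise distances are known.
delta2 = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (0, 3): 1.0}
X = [[random.uniform(0, 1), random.uniform(0, 1)] for _ in range(4)]
before = sparse_stress(X, delta2)
for _ in range(200):
    X = gradient_step(X, delta2)
after = sparse_stress(X, delta2)
```

With so few constraints, many configurations reach low stress, which is exactly the under-constrained regime in which section 4.1 shows ALSCAL failing.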
Note that the systems in [14] have N = 1000.\nFor generalizing musical artist similarity, [7] also computes an N × N matrix of distances between all artists in a set, based on the shortest distance through a graph. The sparse graph in [7] was generated by human editors at the All Music Guide. [7] shows that human perception of artist similarity is well modeled by generalizing using the shortest graph distance. Similar to [14], [7] projects the N × N set of artist distances into a Euclidean space by a full MDS algorithm. Note that the MDS system in [7] has N = 412.\nThe computational complexity of these methods inhibits their use on large data sets. Let us analyze the complexity of each portion of this method.\nFor finding all of the minimum distances, Floyd's algorithm operates on a dense matrix of distances and has computational complexity O(N³). A better choice is to run Dijkstra's algorithm [6], which finds the minimum distances from a single vertex to all other vertices in the graph. Thus, Dijkstra's algorithm must be run N times. The complexity of one invocation of Dijkstra's algorithm (when implemented with a binary heap [11]) is O(M log N), where M is the number of edges in the graph.\nRunning a standard MDS algorithm on a full N × N matrix of distances requires O(N²Kd) computation, where K is the number of iterations of the MDS algorithm and d is the dimensionality of the embedding. Therefore, the overall computational complexity of the approach is O(MN log N + N²Kd), which can be prohibitive for large N and M.\n\n3. Use a graph algorithm to generate a thin dense rectangle of distances.\nOne natural way to reduce the complexity of the graph-traversal part of Isomap is to not run Dijkstra's algorithm N times. 
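A single invocation of Dijkstra's algorithm fills in one row of the distance matrix, and this is the basic operation of the rectangular approach. A minimal binary-heap sketch using Python's heapq (the tiny graph is invented for illustration):

```python
import heapq

def dijkstra_row(adj, source):
    """One Dijkstra call with a binary heap: shortest graph distance from
    `source` to every reachable vertex, i.e. one row of the rectangular
    distance matrix. `adj` maps vertex -> list of (neighbor, edge_distance)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Tiny graph: 0 -1- 1 -2- 2, plus a longer direct edge 0 -4- 2.
adj = {0: [(1, 1.0), (2, 4.0)],
       1: [(0, 1.0), (2, 2.0)],
       2: [(1, 2.0), (0, 4.0)]}
row = dijkstra_row(adj, 0)
```

Each heap operation costs O(log N), and every edge is relaxed at most once in each direction, giving the O(M log N) per-call bound quoted above.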
In other words, instead of generating the entire N × N matrix of dissimilarities, generate an interesting subset of n rows, n ≪ N.\nThere is a family of MDS algorithms, here called Rectangular Dijkstra (RD) MDS algorithms. RD algorithms operate on a dense rectangle of distances, filled in by Dijkstra's algorithm. The first published member of this family was Landmark MDS (LMDS) [5]. Bengio et al. [2] show that LMDS is the Nyström approximation [1] combined with classical MDS [4] operating on the rectangular distance matrix. (See also [10] for the Nyström approximation applied to spectral clustering.)\nLMDS operates on a number of rows proportional to the embedding dimensionality, d. Thus, Dijkstra gets called O(d) times. LMDS then centers the n × n distance submatrix, converting it into a kernel matrix K. The top d column eigenvectors v_i and eigenvalues λ_i of K are then computed. The ith coordinate of the mth embedded point is then\n\nx_mi = (1/2) Σ_j M_ij (A_j − D_jm),    (2)\n\nwhere M_ij is the jth component of v_i divided by √λ_i, A_j is the average distance in the jth row of the rectangular distance matrix, and D_jm is the distance between the mth point and the jth point (j ∈ [1..n]). Thus, the computational complexity of LMDS is O(Md log N + Nd² + d³).\n\n3 New Algorithm: Fast Sparse Embedding\n\nLMDS requires the solution of an n × n eigenproblem. To avoid this eigenproblem, this paper presents a new RD MDS algorithm, called FSE (Fast Sparse Embedding). Instead of a Nyström approximation, FSE uses FastMap [8]: an MDS algorithm that takes a constant number of rows of the dissimilarity matrix. FastMap iterates over the dimensions of the projection, fixing the position of all vertices in each dimension in turn. FastMap thus approximates the solution of the eigenproblem through deflation.\nConsider the first dimension. 
Two vertices x_a and x_b are chosen, and the dissimilarities from these two vertices to all other vertices i are computed: (δ_ai, δ_bi). In FSE, these dissimilarities are computed by Dijkstra's algorithm. During the first iteration (dimension), the distances (d_ai, d_bi) are set equal to the dissimilarities.\nThe 2N distances can determine the location of the vertices along the dimension, up to a shift, through use of the law of cosines:\n\nx_i = (d_ai² − d_bi²) / (2 d_ab).    (3)\n\nFor each subsequent dimension, two new vertices are chosen and new dissimilarities (δ_ai, δ_bi) are computed by Dijkstra's algorithm. The subsequent dimensions are assumed to be orthogonal to previous ones, so the distances for dimension N are computed from the dissimilarities via\n\nδ_ai² = d_ai² + Σ_{n=1..N−1} (x_an − x_in)²   ⇒   d_ai² = δ_ai² − Σ_{n=1..N−1} (x_an − x_in)².    (4)\n\nThus, each dimension accounts for a fraction of the dissimilarity matrix, analogous to PCA. Note that, except for d_ab, all other distances are needed only as squared distances, so only one square root per dimension is required. The distances produced by Dijkstra's algorithm are the minimum graph distances, modified by equation (4) to reflect the projection used so far.\nFor each dimension, the vertices a and b are heuristically chosen to be as far apart as possible. 
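Equations (3) and (4) are short enough to sketch directly. The code below is a toy illustration of one FastMap dimension plus the deflation step, not the paper's implementation; the three collinear test points are invented:

```python
def fastmap_coordinate(d2_a, d2_b, d2_ab):
    """Equation (3): position of each point along the current axis,
    up to a shift, from *squared* distances to the two pivots a and b."""
    d_ab = d2_ab ** 0.5  # the one square root needed per dimension
    return [(da - db) / (2.0 * d_ab) for da, db in zip(d2_a, d2_b)]

def deflate(d2, pivot_coords, coords):
    """Equation (4): subtract from each squared dissimilarity the part
    already explained by the dimensions fixed so far."""
    return [d - sum((pa - pi) ** 2 for pa, pi in zip(pivot_coords, ci))
            for d, ci in zip(d2, coords)]

# Three collinear points at positions 0, 1, 3; pivots are points 0 and 2.
d2_a = [0.0, 1.0, 9.0]  # squared dissimilarities to pivot a
d2_b = [9.0, 4.0, 0.0]  # squared dissimilarities to pivot b
x = fastmap_coordinate(d2_a, d2_b, 9.0)
```

For collinear input, one dimension explains everything: deflating `d2_a` by the recovered coordinates yields residuals of zero, which is the PCA-like behavior described above.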
In order to avoid an O(N²) step in choosing a and b, [8] recommends starting with an arbitrary point, finding the point farthest away from the current point, then setting the current point to that farthest point and repeating.\nThe work of each Dijkstra call (including equation (4)) is O(M log N + Nd), so the complexity of the entire algorithm is O(Md log N + Nd²).\n\n4 Experimental Results\n\n4.1 Artificial Data\n\n[Figure 1: two scatter plots, “Output of ALSCAL” (left) and “Output of FSE” (right).]\nFigure 1: Reconstructing a grid of points directly from a sparse distance matrix. On the left, ALSCAL cannot reconstruct the grid, while on the right, FSE accurately reconstructs the grid.\n\nAn MDS algorithm needs to be tested on distance matrices that are computed from distances between real points, in order to verify that the algorithm quickly produces sensible results.\nFSE and ALSCAL were both tested on a set of 100 points in a 10 × 10 2D grid with unit spacing. The distances from each point to a random 10 of its nearest 20 other points were presented to each algorithm. The results are shown in Figure 1. Procrustes analysis [4] is applied to the output of each algorithm; the output is shown after the best orthogonal affine projection between the algorithm output and the original data.\nFigure 1 shows that ALSCAL does a very poor job of reconstructing the locations of the data points, while FSE accurately reconstructs the grid locations. ALSCAL's poor performance is caused by performing optimization on a non-convex cost function. When the dissimilarity matrix is very sparse, there are not enough constraints on the final solution, so ALSCAL gets stuck in a local minimum. 
Similar results were seen with Sammon's method [4].\nThese results show that FSE (and other RD MDS algorithms) are preferable to sparse MDS algorithms. FSE does not solve an optimization problem, and hence does not get stuck in a local minimum.\n\n4.2 Application: Generalizing Music Similarity\n\nThis section presents the results of using RD MDS algorithms to project a large music dissimilarity graph into a low-dimensional Euclidean space. This projection enables visualization and interpolation over music collections.\nThe dissimilarity graph was derived from a music metadata database. The database consists of 10289 artists, 67799 albums, and 188749 tracks. Each track has subjective metadata assigned to it by human editors: style (specific style), subgenre (more general style), vocal code (gender of singer), and mood. See [12] for more details on the metadata. The database contains which tracks occur on which albums and which artists created those albums.\n\nRelationship Between Entities | Edge Distance in Graph\nTwo tracks have same style, vocal code, mood | 1\nTwo tracks have same style | 2\nTwo tracks have same subgenre | 4\nTrack is on album | 1\nAlbum is by artist | 2\n\nTable 1: Mapping of relationship to edge distance.\n\nA sparse similarity graph was extracted from the metadata database according to Table 1. Every track, album, and artist is represented by a vertex in the graph. Every track was connected to all albums it appeared on, while each album was connected to its artist. The track similarity edges were sampled randomly, to provide an average of 7 links each for edges of distance 1, 2, and 4. The final graph contained 267K vertices and 3.22M edges. RD MDS enabled this experiment: the full distance matrix would have taken days to compute with 267K calls to Dijkstra. 
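The mapping in Table 1 might be realized as follows. This is a hypothetical sketch: the record layout and field names (`style`, `vocal`, `mood`) are invented, the paper's random edge sampling and the subgenre relation are omitted, and each track pair is given only its strongest applicable edge:

```python
# Invented toy metadata; the paper's actual schema is not specified.
tracks = [
    {"id": "t1", "album": "a1", "style": "s1", "vocal": "f", "mood": "m1"},
    {"id": "t2", "album": "a1", "style": "s1", "vocal": "f", "mood": "m1"},
    {"id": "t3", "album": "a2", "style": "s1", "vocal": "m", "mood": "m2"},
]
albums = {"a1": "artistA", "a2": "artistA"}  # album -> artist

edges = []  # (vertex, vertex, edge_distance), following Table 1
for t in tracks:
    edges.append((t["id"], t["album"], 1))      # track is on album
for alb, art in albums.items():
    edges.append((alb, art, 2))                 # album is by artist
for i, t in enumerate(tracks):
    for u in tracks[i + 1:]:
        if t["style"] == u["style"]:
            if t["vocal"] == u["vocal"] and t["mood"] == u["mood"]:
                edges.append((t["id"], u["id"], 1))  # style + vocal + mood
            else:
                edges.append((t["id"], u["id"], 2))  # same style only
```

In the real system the quadratic loop over track pairs would be replaced by grouping tracks by metadata value and sampling, which is how the average of 7 links per relation is achieved.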
Also, the graph distances were derived after some tuning (not on the test set); the speed of RD MDS enabled this tuning.\nOne advantage of the music application is that the quality of the embedding can be tested externally. A test set of 50 playlists, with 444 pairs of sequential songs, was gathered from real users who listened to these playlists. An embedding is considered good if sequential songs in the playlists are frequently closer to each other than random songs in the database.\nTable 2 shows the quality of the embedding as the fraction of random songs that are closer than sequential songs. The lower the fraction, the better the embedding, because the embedding more accurately reflects users' ideas of music similarity. This fraction is computed by treating the pairwise distances as scores from a classifier, computing an ROC curve, and then computing 1.0 minus the area under the ROC curve [3].\n\nAlgorithm | n | Average % of Random Songs Closer than Sequential Songs | CPU time (sec)\nFSE | 60 | 5.0% | 52.8\nLMDS | 60 | 4.5% | 52.7\nLMDS | 100 | 4.1% | 87.4\nLMDS | 200 | 3.3% | 175.0\nLMDS | 400 | 3.2% | 355.1\nLaplacian Eigenmaps | N/A | 13.0% | 8003.4\n\nTable 2: Speed and accuracy of music embedding for various algorithms.\n\nAll embeddings are 20-dimensional (d = 20). The CPU time was measured on a 2.4 GHz Pentium 4. FSE uses a fixed rectangle size n = 3d, so it has one entry in the table. For the same n, FSE and LMDS are competitive. However, LMDS can trade off speed for accuracy by increasing n.\nA Laplacian Eigenmap applied to the entire sparse similarity matrix was much slower than either of the RD MDS algorithms, and did not perform as well for this problem. 
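The evaluation metric above (1.0 minus the area under the ROC curve) reduces to a pairwise comparison: the fraction of (sequential, random) distance pairs in which the random song is the closer one, counting ties as half. A minimal sketch of that reduction, with invented distances; this mirrors the described metric, not the paper's evaluation code:

```python
def fraction_random_closer(pair_dists, random_dists):
    """Table 2 metric as 1 - AUC: over all (sequential, random)
    comparisons, the fraction where the random song is closer,
    with ties counted as half."""
    wins = 0.0
    for p in pair_dists:        # distance between sequential songs
        for r in random_dists:  # distance to a random song
            if r < p:
                wins += 1.0
            elif r == p:
                wins += 0.5
    return wins / (len(pair_dists) * len(random_dists))

# Invented example: sequential songs are usually closer than random ones.
seq = [0.5, 0.7, 1.0]
rnd = [0.9, 2.0, 3.0, 4.0]
frac = fraction_random_closer(seq, rnd)
```

A perfect embedding would give a fraction of 0; an uninformative one gives 0.5, so the 3.2% to 13.0% range in Table 2 is directly interpretable.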
A Gaussian kernel with σ = 2 was used to convert distances to similarities for the Laplacian Eigenmap. The slowness of the Laplacian Eigenmap prevented extensive tuning of its parameters.\n\n[Figure 2: a 2D scatter plot of 23 artists, including Bob Dylan, Cat Stevens, The Eagles, Aerosmith, The Beatles, The Who, Led Zeppelin, The Doors, Jimi Hendrix, Talking Heads, The Police, Bryan Ferry, Fleetwood Mac, Dire Straits, The Rolling Stones, Kate Bush, Genesis, Sheryl Crow, Suzanne Vega, Alanis Morissette, Peter Gabriel, Sarah McLachlan, and Tori Amos.]\nFigure 2: LMDS projection of the entire music dissimilarity graph into 2D. The coordinates of 23 artists are shown.\n\nGiven that LMDS outperforms FSE for large n, this paper now presents qualitative results from the LMDS n = 400 projection. First, the top two dimensions are plotted to form a visualization of music space. This visualization is shown in Figure 2, which shows the coordinates of 23 artists that occur near the center of the space. Even restricted to the top two dimensions, the projection is sensible. For example, Tori Amos and Sarah McLachlan are mapped to be very close.\n\nArtist 1 | Track 1 | Artist 2 | Track 2\nJimi Hendrix | Purple Haze | Alanis | Hand In My Pocket\nJimi Hendrix | Fire | Alanis | All I Really Want\nJimi Hendrix | Red House | Alanis | You Oughta Know\nJimi Hendrix | I Don't Live Today | Alanis | Right Through You\nJimi Hendrix | Foxey Lady | Alanis | You Learn\nJimi Hendrix | 3rd Stone from the Sun | Alanis | Ironic\nDoors | Waiting for the Sun | Sarah McLachlan | Full of Grace\nDoors | LA Woman | Sarah McLachlan | Hold On\nDoors | Riders on the Storm | Sarah McLachlan | Good Enough\nDoors | Love her Madly | Sarah McLachlan | The Path of Thorns\nCat Stevens | Ready | Sarah McLachlan | Possession\nCat Stevens | Music | Blondie | Tide is High\nCat Stevens | Jesus | Sarah McLachlan | Ice Cream\nCat Stevens | King of Trees | Sarah McLachlan | Fumbling Towards Ecstasy\nThe Beatles | Octopus's Garden | Fiona Apple | Limp\nThe Beatles | I'm So Tired | Fiona Apple | Paper Bag\nThe Beatles | Revolution 9 | Fiona Apple | Fast As You Can\nThe Beatles | Sgt. Pepper's Lonely | Blondie | Call Me\nThe Beatles | Please Please Me | Blondie | Hanging on the Telephone\nThe Beatles | Eleanor Rigby | Blondie | Rapture\n\nTable 3: Two playlists produced by the system. Each playlist reads top to bottom. The playlists interpolate between the first and last songs.\n\nThe main application for the music graph projection is the generation of playlists. There are several different possible objectives for music playlists: background listening, dance mixes, music discovery. One criterion for playlists is that they play similar music together (i.e., avoid distracting jumps, like New Age to Heavy Metal). The goal for this paper is to generate playlists for background listening. Therefore, the only criterion we use for generation is smoothness, and playlists are generated by linear interpolation in the embedding space.\nHowever, smoothness is not the only possible playlist-generation mode: other criteria (such as matching beats, artist self-avoidance, or a minimum distance between songs) can be added on top of the smoothness criterion. Such criteria are a matter of subjective musical taste and are beyond the scope of this paper.\nTable 3 shows two background-listening playlists formed by interpolating in the projected space. The playlists were drawn from a collection of 3920 songs. Unlike the image interpolation in [14], not every point in the 20-dimensional space has a valid song attached to it. 
The interpolation was performed by first computing the line segment connecting the first and last songs, and then placing K equally spaced points along the line segment, where K is the number of slots in the playlist. For every slot i, the location of the previous song is projected onto a hyperplane normal to the line segment that goes through the ith point. The projected location is then moved halfway toward the ith point, and the song nearest to the moved location is placed into the playlist. This method provides smooth interpolation without large jumps, as can be seen in Table 3.\n\n5 Discussion and Conclusions\n\nMusic playlist generation and browsing can utilize a large sparse similarity graph designed by editors. In order to allow tractable computations on this graph, its vertices can be projected into a low-dimensional space. This projection enables smooth interpolation and two-dimensional display of music.\nMusic similarity graphs are amongst the largest graphs ever to be embedded. Rectangular Dijkstra MDS algorithms can be used to efficiently embed these large sparse graphs. This paper showed that FSE and the Nyström (LMDS) technique are both efficient and have comparable performance for the same size of rectangle. Both algorithms are much more efficient than Laplacian Eigenmaps. However, LMDS permits an accuracy/speed trade-off that makes it preferable. Using LMDS, a music graph with 267K vertices and 3.22M edges can be embedded in approximately 6 minutes.\n\nReferences\n\n[1] C. Baker. The Numerical Treatment of Integral Equations. Clarendon Press, Oxford, 1977.\n[2] Y. Bengio, J.-F. Paiement, and P. Vincent. Out-of-sample extensions for LLE, Isomap, MDS, Eigenmaps and spectral clustering. In S. Thrun, L. Saul, and B. Schölkopf, editors, Proc. NIPS, volume 16, 2004.\n[3] A. P. Bradley. The use of the area under the ROC curve in the evaluation of machine learning algorithms. 
Pattern Recognition, 30:1145–1159, 1997.\n[4] T. F. Cox and M. A. A. Cox. Multidimensional Scaling. Number 88 in Monographs on Statistics and Applied Probability. Chapman & Hall/CRC, 2nd edition, 2001.\n[5] V. de Silva and J. B. Tenenbaum. Global versus local methods in nonlinear dimensionality reduction. In S. Becker, S. Thrun, and K. Obermayer, editors, Proc. NIPS, volume 15, pages 721–728, 2003.\n[6] E. W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1:269–271, 1959.\n[7] D. P. W. Ellis, B. Whitman, A. Berenzweig, and S. Lawrence. The quest for ground truth in musical artist similarity. In Proc. International Conference on Music Information Retrieval (ISMIR), 2002.\n[8] C. Faloutsos and K.-I. Lin. FastMap: a fast algorithm for indexing, data-mining and visualization of traditional and multimedia datasets. In Proc. ACM SIGMOD, pages 163–174, 1995.\n[9] R. Floyd. Algorithm 97 (shortest path). Communications of the ACM, 5:345, 1962.\n[10] C. Fowlkes, S. Belongie, and J. Malik. Efficient spatiotemporal grouping using the Nyström method. In Proc. CVPR, volume 1, pages I-231–I-238, 2001.\n[11] D. B. Johnson. Efficient algorithms for shortest paths in sparse networks. JACM, 24:1–13, 1977.\n[12] J. C. Platt, C. J. C. Burges, S. Swenson, C. Weare, and A. Zheng. Learning a Gaussian process prior for automatically generating music playlists. In T. Dietterich, S. Becker, and Z. Ghahramani, editors, Proc. NIPS, volume 14, pages 1425–1432, 2002.\n[13] Y. Takane, F. W. Young, and J. de Leeuw. Nonmetric individual differences multidimensional scaling: an alternating least squares method with optimal scaling features. Psychometrika, 42:7–67, 1977.\n[14] J. B. Tenenbaum. Mapping a manifold of perceptual observations. In M. Jordan, M. Kearns, and S. Solla, editors, Proc. 
NIPS, volume 10, pages 682–688, 1998.\n", "award": [], "sourceid": 2510, "authors": [{"given_name": "John", "family_name": "Platt", "institution": null}]}