NeurIPS 2019
Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Paper ID: 2295
Title:q-means: A quantum algorithm for unsupervised machine learning

Reviewer 1


Response to rebuttal: I think the author responses were done well. I think they have satisfactorily answered the questions that I had raised. The reason I am torn between a strong accept and an accept is that most of the techniques used in this paper have already appeared before in various quantum algorithms and are well known in the quantum community. Having said that, I think putting together known techniques in a rigorous fashion and also practically implementing their algorithm on a quantum simulator is interesting, especially for a problem which is practically important. I think the final aspect might be of interest to the classical ML community: to see how quantum can provide polynomial speedups to relevant ML problems using a toolbag of interesting techniques. I would strongly recommend this paper if the other referees also back it and there is room for more talks.

-------------------

In this paper, the authors look at the problem of k-means clustering: one is given N d-dimensional vectors v_1, ..., v_N in R^d, and the goal is to partition the vectors into k classes so that the vectors within a class are close to one another (where closeness is measured in Euclidean distance). Let V = [v_1, ..., v_N] in R^{N x d}. The complexity of an algorithm for this problem is phrased in terms of the parameters k, d, N, and parameters of V (such as the condition number of V and the norms of the vectors in V). Classically, as far as I can tell, Lloyd's algorithm for clustering runs in time O(Nkd) per iteration (I am surprised this paper doesn't make a single mention of the classical complexity of this problem). At a high level, the authors produce a quantum algorithm that runs linearly in k and d and polylogarithmically in N.

In order to phrase their theorem, they consider the following variant of k-means clustering: they introduce delta-k-means clustering, where they artificially introduce noise in the label-assignment and centroid-update steps (with the motivation that quantum procedures are noisy, which is vague in my opinion). In the delta-k-means assignment step, instead of choosing the closest centroid to a given point, the algorithm may choose any one of the centroids that is delta-close to it. Additionally, in calculating a centroid, any vector delta-close to the computed "optimal centroid" may be assigned in that step of the algorithm. I find these two noisy steps not obviously necessary. In particular, why couldn't one just fix delta to be zero and consider a quantum improvement to the standard k-means clustering problem? Is this noise-introducing step really necessary? (A sketch contrasting the two assignment rules follows below.)
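To make the contrast concrete, here is a minimal sketch of the two assignment rules, assuming a NumPy implementation; the function names and the uniform choice among delta-close centroids are illustrative, not taken from the paper:

```python
# Sketch of the standard vs. noisy assignment rules (illustrative only;
# the paper's exact delta-closeness criterion may differ in detail).
import numpy as np

def assign_kmeans(X, C):
    """Lloyd's assignment: each point gets its closest centroid.
    X is (N, d), C is (k, d); this step costs O(Nkd)."""
    D = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)  # (N, k)
    return np.argmin(D, axis=1)

def assign_delta_kmeans(X, C, delta, rng):
    """delta-k-means assignment: a point may be given ANY centroid whose
    distance is within delta of its closest centroid (chosen uniformly)."""
    D = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)  # (N, k)
    best = D.min(axis=1)
    return np.array([rng.choice(np.flatnonzero(row <= b + delta))
                     for row, b in zip(D, best)])
```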
Final Evaluation: Pros and cons of this paper:

Pros: I think the problem they study is definitely interesting. k-means clustering is definitely a problem that is at the core of classical ML, and it deserves attention in the quantum community. Having said that, this submission is lacking in various aspects, which I mention below.

Cons: There are a number of points which seem to be missing in this paper.
1) First and foremost, given that they want to claim quantum offers an advantage for k-means clustering, I would *strongly* encourage the authors to mention the classical running time of standard clustering algorithms, as well as in the noisy case.
2) They make the assumption that the dataset is well-structured; does this assumption ease the classical run-time analysis as well? What is the classical complexity under it? Recently in quantum machine learning, papers that did not prove classical lower bounds under the specialized assumptions for which quantum seems to offer an advantage have been shown to be over-promised.
3) There is a paper by Wiebe, Svore, and Kapoor on supervised clustering. At a quick glance, the techniques used in Wiebe et al.'s paper and in this paper are very similar (especially the use of the swap test, the minimum-finding algorithm, "tomography", distance estimation, etc.), and they solve similar problems. So how is the algorithm in this paper better than the one in Wiebe et al.'s paper, which appeared over 5 years ago?
4) Finally, given that this submission is to a classical ML conference, I would have liked to see a bit more motivation as to why quantum is important for this problem. Additionally, they consider variants of k-means, such as noisy k-means, and state that these new problems are quantum-amenable, but it seems like a roundabout way to propose a quantum speedup for the natural k-means problem. So why consider these variants?

Overall, I believe this problem is interesting and deserves attention, but I find this submission lacking in various aspects and wouldn't push for a strong accept. I would accept this paper only if there is space for more quantum talks.

More comments on the paper:
1) In lines 66-70, is this practically motivated? That is, when implementing practical algorithms for k-means clustering, is the noisy step part of those algorithms?
2) I would like more discussion of the QRAM aspect on line 77: are you assuming the same data structures as in ref. [20] in your paper as well? If so, what are they? Briefly mention them.
3) The scalings of the algorithms in Results 1 and 2 are very hairy. I find no intuition as to how one should view kappa(V) and mu(V). I would like some discussion around the results that at least suggests to a reader how one should view these parameters in terms of datasets. Additionally, I would *strongly* encourage the authors to mention classical upper/lower bounds for these problems in terms of all the parameters; this would at least aid the reader in understanding Results 1 and 2.
4) In Definition 1, do these assumptions also make classical algorithms efficient?
5) Lines 201-207 are slightly confusing: what do you mean by "measure"? If you choose to measure for the coupon-collector calculation, don't you lose the state each time, and wouldn't that give an O(k) overhead? (See the coupon-collector sketch below.)
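Regarding comment 5: if each measurement is an independent, roughly uniform draw over the k cluster labels (an illustrative assumption, not a claim about the paper's procedure), collecting all k labels is a coupon-collector process with expectation k*H_k = O(k log k). A quick simulation sketch:

```python
# Coupon-collector count for comment 5: the expected number of uniform
# draws needed to observe all k labels is k * H_k, i.e., O(k log k).
import numpy as np

def draws_to_see_all_labels(k, rng):
    seen, draws = set(), 0
    while len(seen) < k:
        seen.add(int(rng.integers(k)))  # one "measurement" = one label
        draws += 1
    return draws

rng = np.random.default_rng(0)
k = 16
mean_draws = np.mean([draws_to_see_all_labels(k, rng) for _ in range(2000)])
print(mean_draws, k * sum(1.0 / i for i in range(1, k + 1)))  # vs k * H_k
```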

Reviewer 2


Response to rebuttal: I think the rebuttal is well developed and answers most of my questions. I raised some questions in the review because I am not aware of some constraints within the quantum computing framework. The idea proposed in this paper seems to have broad applications to other EM-style algorithms which have a strong dependence on the data size at each step. But I still hope the authors can state q-means++ more clearly in the final version.

=================================

There has been a lot of interest recently in quantum computing; however, little of it has been used to solve clustering problems. This paper investigates the problem of applying quantum computing to the classic k-means problem, which is known to be NP-hard in most cases. The authors propose a novel quantum algorithm called q-means, and they provide the time complexity of their algorithm per iteration. Since neither quantum simulators nor quantum computers large enough to test q-means are currently available, they argue that a robust delta-k-means algorithm should have performance similar to that of their algorithm. Some numerical results are presented to show that delta-k-means can achieve almost the same accuracy as plain vanilla k-means; a small comparison sketch in this spirit is given below.

The main strength of the paper lies in the improvement of the time complexity per iteration, reducing the linear dependence on the data size to poly-logarithmic by using quantum information. This is a novel idea and can be used for other clustering algorithms. The paper is well organized except for Section 2; moving the algorithmic scheme from the supplementary material into the paper might aid understanding. Some important notation is also omitted.
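A minimal sketch of the kind of comparison the authors report, assuming synthetic data from scikit-learn's make_blobs; the delta-k-means update below is one plausible reading of the noisy assignment rule, not the authors' code:

```python
# Compare vanilla k-means (delta = 0) with a noisy delta-k-means variant
# on synthetic blobs. Assumes NumPy and scikit-learn; illustrative only.
import numpy as np
from sklearn.datasets import make_blobs

def lloyd(X, k, delta=0.0, iters=30, seed=0):
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)].copy()  # random init
    for _ in range(iters):
        D = np.linalg.norm(X[:, None] - C[None], axis=2)      # (N, k)
        best = D.min(axis=1)
        # Noisy assignment: any centroid delta-close to the best is allowed.
        labels = np.array([rng.choice(np.flatnonzero(row <= b + delta))
                           for row, b in zip(D, best)])
        for j in range(k):                                    # centroid update
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return ((X - C[labels]) ** 2).sum()                       # final inertia

X, _ = make_blobs(n_samples=1000, centers=4, random_state=0)
print(lloyd(X, k=4, delta=0.0), lloyd(X, k=4, delta=0.5))
```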

Reviewer 3


Response to rebuttal: I read the authors' response and I don't have any further comments.

--------------------------

ORIGINALITY
===========
The proposed q-means algorithm follows along the lines of the classical k-means algorithm but takes advantage of recent advances in quantum computing (such as vector tomography and quantum linear algebra). Its originality lies in putting all the pieces together and giving a rigorous analysis of the resulting algorithm.

QUALITY & CLARITY
=================
The paper is of very high quality, and the authors take great care in giving rigorous analyses of all the steps and in performing experiments to validate some of the theoretical claims. The writing quality is uneven. In general, the supplementary material is in much better shape than the 8-page main submission. For instance, the main text has no comparison at all to previous work, nor a section describing the novelty that allows the authors to beat previous records.

SIGNIFICANCE
============
Given the excitement around quantum algorithms for machine learning, this paper is of high significance and should be of interest to many NeurIPS attendees. Also, the quantum parts of the algorithm are nicely packaged into separate modules, so it gives a very good template of how quantum tools can accelerate classical algorithms (provided access to quantum data, of course).
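As an aside on the "separate modules" point: one way to picture the template is a pipeline in which each quantum primitive sits behind a narrow interface. The sketch below stubs the quantum subroutines with classical stand-ins; all names are hypothetical and not taken from the paper.

```python
# Schematic of the modular template: each quantum primitive is hidden
# behind a small function and replaced by a classical stand-in here.
import numpy as np

def estimate_distances(v, C):
    # Stand-in for quantum distance estimation (e.g., swap-test based).
    return np.linalg.norm(C - v, axis=1)

def find_minimum(estimates):
    # Stand-in for quantum minimum finding over k estimates.
    return int(np.argmin(estimates))

def tomography(cluster_points, fallback):
    # Stand-in for vector-state tomography of the updated centroid.
    return cluster_points.mean(axis=0) if len(cluster_points) else fallback

def qmeans_like_iteration(X, C):
    labels = np.array([find_minimum(estimate_distances(v, C)) for v in X])
    return np.array([tomography(X[labels == j], C[j]) for j in range(len(C))])
```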