{"title": "Fast, Sample-Efficient Algorithms for Structured Phase Retrieval", "book": "Advances in Neural Information Processing Systems", "page_first": 4917, "page_last": 4927, "abstract": "We consider the problem of recovering a signal x in R^n, from magnitude-only measurements, y_i = |a_i^T x| for i={1,2...m}. Also known as the phase retrieval problem, it is a fundamental challenge in nano-, bio- and astronomical imaging systems, astronomical imaging, and speech processing. The problem is ill-posed, and therefore additional assumptions on the signal and/or the measurements are necessary. In this paper, we first study the case where the underlying signal x is s-sparse. We develop a novel recovery algorithm that we call Compressive Phase Retrieval with Alternating Minimization, or CoPRAM. Our algorithm is simple and can be obtained via a natural combination of the classical alternating minimization approach for phase retrieval, with the CoSaMP algorithm for sparse recovery. Despite its simplicity, we prove that our algorithm achieves a sample complexity of O(s^2 log n) with Gaussian samples, which matches the best known existing results. It also demonstrates linear convergence in theory and practice and requires no extra tuning parameters other than the signal sparsity level s. We then consider the case where the underlying signal x arises from to structured sparsity models. We specifically examine the case of block-sparse signals with uniform block size of b and block sparsity k=s/b. For this problem, we design a recovery algorithm that we call Block CoPRAM that further reduces the sample complexity to O(ks log n). For sufficiently large block lengths of b=Theta(s), this bound equates to O(s log n). 
To our knowledge, this constitutes the first end-to-end linearly convergent family of algorithms for phase retrieval where the Gaussian sample complexity has a sub-quadratic dependence on the sparsity level of the signal.", "full_text": "Fast, Sample-Efficient Algorithms for Structured Phase Retrieval

Gauri Jagatap
Electrical and Computer Engineering
Iowa State University

Chinmay Hegde
Electrical and Computer Engineering
Iowa State University

Abstract

We consider the problem of recovering a signal x* ∈ R^n from magnitude-only measurements, y_i = |⟨a_i, x*⟩| for i = {1, 2, ..., m}. Also known as the phase retrieval problem, it is a fundamental challenge in nano-, bio-, and astronomical imaging systems, and in speech processing. The problem is ill-posed, and therefore additional assumptions on the signal and/or the measurements are necessary. In this paper, we first study the case where the underlying signal x* is s-sparse. We develop a novel recovery algorithm that we call Compressive Phase Retrieval with Alternating Minimization, or CoPRAM. Our algorithm is simple and can be obtained via a natural combination of the classical alternating minimization approach for phase retrieval with the CoSaMP algorithm for sparse recovery. Despite its simplicity, we prove that our algorithm achieves a sample complexity of O(s^2 log n) with Gaussian samples, which matches the best known existing results. It also demonstrates linear convergence in theory and practice, and requires no extra tuning parameters other than the signal sparsity level s. We then consider the case where the underlying signal x* arises from structured sparsity models. We specifically examine the case of block-sparse signals with uniform block size b and block sparsity k = s/b.
For this problem, we design a recovery algorithm that we call Block CoPRAM that further reduces the sample complexity to O(ks log n). For sufficiently large block lengths of b = Θ(s), this bound equates to O(s log n). To our knowledge, this constitutes the first end-to-end linearly convergent family of algorithms for phase retrieval where the Gaussian sample complexity has a sub-quadratic dependence on the sparsity level of the signal.

1 Introduction

1.1 Motivation

In this paper, we consider the problem of recovering a signal x* ∈ R^n from (possibly noisy) magnitude-only linear measurements. That is, for sampling vectors a_i ∈ R^n, if

  y_i = |⟨a_i, x*⟩|, for i = 1, ..., m,  (1)

then the task is to recover x* using the measurements y and the sampling matrix A = [a_1 ... a_m]^T. Problems of this kind arise in numerous scenarios in machine learning, imaging, and statistics. For example, the classical problem of phase retrieval is encountered in imaging systems such as diffraction imaging, X-ray crystallography, ptychography, and astronomy [1, 2, 3, 4, 5]. For such imaging systems, the optical sensors used for light acquisition can only record the intensity of the light waves but not their phase. In terms of our setup, the vector x* corresponds to an image (with a resolution of n pixels) and the measurements correspond to the magnitudes of its 2D Fourier coefficients. The goal is to stably recover the image x* using as few observations m as possible.

31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.

Despite the prevalence of several heuristic approaches [6, 7, 8, 9], it is generally accepted that (1) is a challenging nonlinear, ill-posed inverse problem in theory and practice.
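The forward model (1) is straightforward to simulate. The following is a minimal NumPy sketch of the Gaussian measurement model used throughout the paper; the dimensions n, m, s and all variable names are illustrative choices of ours, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 100, 500, 5  # illustrative signal length, samples, sparsity

# s-sparse ground-truth signal x* in the canonical basis, normalized to unit norm.
x_star = np.zeros(n)
support = rng.choice(n, size=s, replace=False)
x_star[support] = rng.standard_normal(s)
x_star /= np.linalg.norm(x_star)

# Gaussian sampling vectors a_i stacked as rows of A, and
# magnitude-only measurements y_i = |<a_i, x*>| as in (1).
A = rng.standard_normal((m, n))
y = np.abs(A @ x_star)
```

Note that y carries no sign information: the measurements of x* and -x* are identical, which is the global-phase ambiguity the dist(·,·) metric of Section 2 accounts for.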
For generic a_i and x*, one can show that (1) is NP-hard by reduction from well-known combinatorial problems [10]. Therefore, additional assumptions on the signal x* and/or the measurement vectors a_i are necessary.
A recent line of breakthrough results [11, 12] has provided efficient algorithms for the case where the measurement vectors arise from certain multi-variate probability distributions. The seminal paper by Netrapalli et al. [13] provides the first rigorous justification of classical heuristics for phase retrieval based on alternating minimization. However, all these newer results require an "overcomplete" set of observations, i.e., the number of observations m exceeds the problem dimension n (m = O(n) being the tightest evaluation of this bound [14]). This requirement can pose severe limitations on computation and storage, particularly when m and n are very large.
One way to mitigate the dimensionality issue is to use the fact that in practical applications, x* often obeys certain low-dimensional structural assumptions. For example, in imaging applications x* is s-sparse in some known basis, such as identity or wavelet. For transparency, we assume the canonical basis for sparsity throughout this paper. Similar structural assumptions form the core of sparse recovery and streaming algorithms [15, 16, 17], and it has been established that only O(s log(n/s)) samples are necessary for stable recovery of x*, which is information-theoretically optimal [18].
Several approaches for solving the sparsity-constrained version of (1) have been proposed, including alternating minimization [13], methods based on convex relaxation [19, 20, 21], and iterative thresholding [22, 23].
Curiously, all of the above techniques incur a sample complexity of Ω(s² log n) for stable recovery¹, which is quadratically worse than the information-theoretic limit [18] of O(s log(n/s)). Moreover, most of these algorithms have quadratic (or worse) running time [19, 22], stringent assumptions on the nonzero signal coefficients [13, 23], and require several tuning parameters [22, 23].
Finally, for specific applications, more refined structural assumptions on x* are applicable. For example, point sources in astronomical images often produce clusters of nonzero pixels in a given image, while wavelet coefficients of natural images often can be organized as connected sub-trees. Algorithms that leverage such structured sparsity assumptions have been shown to achieve considerably improved sample complexity in statistical learning and sparse recovery problems using block-sparsity [30, 31, 32, 33], tree sparsity [34, 30, 35, 36], clusters [37, 31, 38], and graph models [39, 38, 40]. However, these models have not been understood in the context of phase retrieval.

1.2 Our contributions

The contributions of this paper are two-fold. First, we provide a new, flexible algorithm for sparse phase retrieval that matches state-of-the-art methods from both a statistical and a computational viewpoint. Next, we show that it is possible to extend this algorithm to the case where the signal is block-sparse, thereby further lowering the sample complexity of stable recovery. Our work can be viewed as a first step towards a general framework for phase retrieval of structured signals from Gaussian samples.
Sparse phase retrieval. We first study the case where the underlying signal x* is s-sparse. We develop a novel recovery algorithm that we call Compressive Phase Retrieval with Alternating Minimization, or CoPRAM².
Our algorithm is simple and can be obtained via a natural combination of the classical alternating minimization approach for phase retrieval with the CoSaMP [41] algorithm for sparse recovery (CoSaMP also naturally extends to several sparsity models [30]). We prove that with Gaussian measurement vectors a_i, our algorithm achieves a sample complexity of O(s² log n) in order to achieve linear convergence, matching the best among all existing results. An appealing feature of our algorithm is that it requires no extra a priori information other than the signal sparsity level s, and no assumptions on the nonzero signal coefficients. To our knowledge, this is the first algorithm for sparse phase retrieval that simultaneously achieves all of the above properties. We use CoPRAM as the basis to formulate a block-sparse extension (Block CoPRAM).
Block-sparse phase retrieval. We consider the case where the underlying signal x* arises from structured sparsity models, specifically block-sparse signals with uniform block size b (i.e., s non-zeros equally grouped into k = s/b blocks). For this problem, we design a recovery algorithm that we

¹Exceptions to this rule are [24, 25, 26, 27, 28, 29], where very carefully crafted measurements a_i are used.
²We use the terms sparse phase retrieval and compressive phase retrieval interchangeably.

Table 1: Comparison of (Gaussian sample) sparse phase retrieval algorithms. Here, n, s, k = s/b denote signal length, sparsity, and block-sparsity.
Õ(·) hides polylogarithmic dependence on 1/ε.

Algorithm       | Sample complexity        | Running time  | Assumptions            | Parameters
AltMinSparse    | Õ(s² log n + s² log³ s)  | Õ(s²n log n)  | x*_min ≈ c‖x*‖₂/√s     | none
ℓ1-PhaseLift    | O(s² log n)              | O(n³/ε²)      | none                   | none
Thresholded WF  | O(s² log n)              | O(n² log n)   | none                   | step μ, thresholds α, β
SPARTA          | O(s² log n)              | Õ(s²n log n)  | x*_min ≈ c‖x*‖₂/√s     | step μ, threshold γ
CoPRAM          | O(s² log n)              | Õ(s²n log n)  | none                   | none
Block CoPRAM    | O(ks log n)              | Õ(ksn log n)  | none                   | none

call Block CoPRAM. We analyze this algorithm and show that leveraging block structure reduces the sample complexity for stable recovery to O(ks log n). For sufficiently large block lengths b = Θ(s), this bound equates to O(s log n). To our knowledge, this constitutes the first phase retrieval algorithm where the Gaussian sample complexity has a sub-quadratic dependence on the sparsity s of the signal. A comparative description of the performance of our algorithms is presented in Table 1.

1.3 Techniques

Sparse phase retrieval. Our proposed algorithm, CoPRAM, is conceptually very simple. It integrates existing approaches in stable sparse recovery (specifically, the CoSaMP algorithm [41]) with the alternating minimization approach for phase retrieval proposed in [13].
A similar integration of sparse recovery with alternating minimization was also introduced in [13]; however, their approach only succeeds when the true support of the underlying signal is accurately identified during initialization, which can be unrealistic.
Instead, CoPRAM permits the support of the estimate to evolve across iterations, and therefore can iteratively "correct" for any errors made during the initialization. Moreover, their analysis requires fresh samples for every new update of the estimate, while ours succeeds in the (more practical) setting of using all the available samples.
Our first challenge is to identify a good initial guess of the signal. As is the case with most non-convex techniques, CoPRAM requires an initial estimate x0 that is close to the true signal x*. The basic idea is to identify "important" coordinates by constructing suitable biased estimators of each signal coefficient, followed by a specific eigendecomposition. The initialization in CoPRAM is far simpler than the approaches in [22, 23], requiring no pre-processing of the measurements and no tuning parameters other than the sparsity level s. A drawback of the theoretical results of [23] is that they impose a requirement on the signal coefficients: min_{j∈S} |x*_j| = C‖x*‖₂/√s. However, this assumption disobeys the power-law decay observed in real-world signals. Our approach also differs from [22], where an initial support is estimated based on a parameter-dependent threshold value. Our analysis removes these requirements; we show that a coarse estimate of the support, coupled with the spectral technique of [22, 23], gives us a suitable initialization. A sample complexity of O(s² log n) is incurred for achieving this estimate, matching the best available previous methods.
Our next challenge is to show that given a good initial guess, alternately estimating the phases and the non-zero coefficients (using CoSaMP) gives rapid convergence to the desired solution. To this end, we use the analysis of CoSaMP [41] and leverage a recent result of [42] to show a per-step decrease in the signal estimation error, using the generic chaining technique of [43, 44].
In particular, we show that any "phase errors" made in the initialization can be suitably controlled across different estimates.
Block-sparse phase retrieval. We use CoPRAM to establish its extension, Block CoPRAM, which is a novel phase retrieval strategy for block-sparse signals from Gaussian measurements. Again, the algorithm is based on a suitable initialization followed by an alternating minimization procedure, mirroring the steps in CoPRAM. To our knowledge, this is the first result for phase retrieval under more refined structured sparsity assumptions on the signal.
As above, the first stage consists of identifying a good initial guess of the solution. We proceed as in CoPRAM, isolating blocks of nonzero coordinates by constructing a biased estimator for the "mass" of each block. We prove that a good initialization can be achieved using this procedure with only O(ks log n) measurements. When the block size is large enough (b = Θ(s)), the sample complexity of the initialization is sub-quadratic in the sparsity level s and only a logarithmic factor above the information-theoretic limit O(s) [30]. In the second stage, we demonstrate a rapid descent to the desired solution. To this end, we replace the CoSaMP sub-routine in CoPRAM with the model-based CoSaMP algorithm of [30], specialized to block-sparse recovery. The analysis proceeds analogously as above. To our knowledge, this constitutes the first end-to-end algorithm for phase retrieval (from Gaussian samples) that demonstrates a sub-quadratic dependence on the sparsity level of the signal.

1.4 Prior work

The phase retrieval problem has received significant attention in the past few years.
Convex methodologies to solve the problem in the lifted framework include PhaseLift and its variations [11, 45, 46, 47]. Most of these approaches suffer severely in terms of computational complexity. PhaseMax produces a convex relaxation of the phase retrieval problem similar to basis pursuit [48]; however, it is not empirically competitive. Non-convex algorithms typically rely on finding a good initial point, followed by minimizing a quadratic (Wirtinger Flow [12, 14, 49]) or moduli ([50, 51]) measurement loss function. Arbitrary initializations have been studied in a polynomial-time trust-region setting in [52]. Some of the convex approaches to sparse phase retrieval include [19, 53], which use a combination of trace-norm and ℓ1-norm relaxation. Constrained sensing vectors have been used [25] at the optimal sample complexity O(s log(n/s)). Fourier measurements have been studied extensively in the convex [54] and non-convex [55] settings. More non-convex approaches to sparse phase retrieval include [13, 23, 22], which achieve Gaussian sample complexities of O(s² log n).

Structured sparsity models such as groups, blocks, clusters, and trees can be used to model real-world signals. Applications of such models have been developed for sparse recovery [30, 33, 39, 38, 40, 56, 34, 35, 36] as well as in high-dimensional optimization and statistical learning [32, 31]. However, to the best of our knowledge, there have been no rigorous results that explore the impact of structured sparsity models on the phase retrieval problem.

2 Paper organization and notation

The remainder of the paper is organized as follows. In Sections 3 and 4, we introduce the CoPRAM and Block CoPRAM algorithms respectively, and provide a theoretical analysis of their statistical performance.
In Section 5, we present numerical experiments for our algorithms.
We use standard notation for matrices (capital, bold: A, P, etc.), vectors (small, bold: x, y, etc.), and scalars (α, c, etc.). Matrix and vector transposes are denoted using ⊤ (e.g., x⊤ and A⊤). The diagonal matrix form of a column vector y ∈ R^m is denoted diag(y) ∈ R^{m×m}. The operator card(S) denotes the cardinality of S. Elements of a are distributed according to the zero-mean standard normal distribution N(0, 1). The phase is denoted using sign(y) ≡ y/|y| for y ∈ R^m (element-wise), and dist(x1, x2) ≡ min(‖x1 − x2‖₂, ‖x1 + x2‖₂) for x1, x2 ∈ R^n is used to denote "distance" up to a global phase factor (both x = x* and x = −x* satisfy y = |Ax|). The projection of a vector x ∈ R^n onto a set of coordinates S is denoted xS ∈ R^n, with xS_j = x_j for j ∈ S and 0 elsewhere. The projection of a matrix M ∈ R^{n×n} onto S is MS ∈ R^{n×n}, with MS_ij = M_ij for i, j ∈ S and 0 elsewhere. For faster algorithmic implementations, MS can be taken to be a truncated matrix MS ∈ R^{s×s}, discarding all rows and columns corresponding to S^c. The element-wise product of two vectors y1, y2 ∈ R^m is denoted y1 ∘ y2. Unspecified large and small constants are denoted by C and δ respectively. The abbreviation w.h.p. denotes "with high probability".

3 Compressive phase retrieval

In this section, we propose a new algorithm for solving the sparse phase retrieval problem and analyze its performance. Later, we will show how to extend this algorithm to the case of more refined structural assumptions on the underlying sparse signal.
We first provide a brief outline of our proposed algorithm.
It is clear that the sparse recovery version of (1) is highly non-convex, and possibly has multiple local minima [22]. Therefore, as is typical in modern non-convex methods [13, 23, 57], we use a spectral technique to obtain a good initial estimate. Our technique is a modification of the initialization stages in [22, 23], but requires no tuning parameters or assumptions on signal coefficients, other than the sparsity s. Once an appropriate initial

Algorithm 1 CoPRAM: Initialization.
input A, y, s.
  Compute signal power: φ² = (1/m) Σ_{i=1}^m y_i².
  Compute signal marginals: M_jj = (1/m) Σ_{i=1}^m y_i² a_ij² for all j.
  Set Ŝ ← the j's corresponding to the top-s values of M_jj.
  Set v1 ← top singular vector of M_Ŝ = (1/m) Σ_{i=1}^m y_i² a_iŜ a_iŜ^⊤ ∈ R^{s×s}.
  Compute x0 ← φv, where v ← v1 on Ŝ and 0 ∈ R^{n−s} on Ŝ^c.
output x0.

Algorithm 2 CoPRAM: Descent.
input A, y, x0, s, t0.
  Initialize x0 according to Algorithm 1.
  for t = 0, ..., t0 − 1 do
    P_{t+1} ← diag(sign(Axt)),
    x_{t+1} ← CoSaMP((1/√m)A, (1/√m)P_{t+1}y, s, xt).
  end for
output z ← x_{t0}.

estimate is chosen, we then show that a simple alternating-minimization algorithm, based on the algorithm in [13], will converge rapidly to the underlying true signal. We call our overall algorithm Compressive Phase Retrieval with Alternating Minimization (CoPRAM), which is divided into two stages: Initialization (Algorithm 1) and Descent (Algorithm 2).

3.1 Initialization

The high-level idea of the first stage of CoPRAM is as follows: we use the measurements y_i to construct a biased estimator, the marginal M_jj corresponding to the j-th signal coefficient, given by:

  M_jj = (1/m) Σ_{i=1}^m y_i² a_ij², for j ∈ {1, ..., n}.  (2)

The marginals themselves do not directly produce signal coefficients, but the "weight" of each marginal identifies the true signal support. Then, a spectral technique based on [13, 23, 22] constructs an initial estimate x0. To accurately estimate the support, earlier works [13, 23] assume that the magnitudes of the nonzero signal coefficients are all sufficiently large, i.e., Ω(‖x*‖₂/√s), which can be unrealistic, violating the power-law decay. Our analysis resolves this issue by relaxing the requirement of accurately identifying the support, without any tuning parameters, unlike [22]. We claim that a coarse estimate of the support is good enough, since the errors correspond to small coefficients. Such "noise" in the signal estimate can be controlled with a sufficient number of samples. Instead, we show that a simple pruning step that rejects the smallest n − s coordinates, followed by the spectral procedure of [23], gives us the initialization that we need. Concretely, if the elements of A are distributed as per the standard normal distribution N(0, 1), a weighted correlation matrix M = (1/m) Σ_{i=1}^m y_i² a_i a_i^⊤ can be constructed, having diagonal elements M_jj. The diagonal elements of the expectation matrix E[M] are given by:

  E[M_jj] = ‖x*‖₂² + 2x*_j²,  (3)

exhibiting a clear separation when analyzed for j ∈ S and j ∈ S^c. We can hence claim that the signal marginals on the diagonal of M corresponding to j ∈ S are larger, on average, than those for j ∈ S^c. Based on this, we evaluate the diagonal elements M_jj and reject the n − s coordinates corresponding to the smallest marginals to obtain a crude approximation of the signal support Ŝ.
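The pruning-plus-spectral procedure described above can be sketched directly. The following is a NumPy transcription of Algorithm 1 (function and variable names are ours); by (3), coordinates in the true support receive larger marginals on average, which is what the top-s pruning exploits:

```python
import numpy as np

def copram_init(A, y, s):
    """Sketch of CoPRAM initialization (Algorithm 1): marginal-based
    support pruning followed by a spectral estimate."""
    m, n = A.shape
    phi2 = np.mean(y ** 2)                                # signal power phi^2, eq. (4)
    M_diag = np.mean((y ** 2)[:, None] * A ** 2, axis=0)  # marginals M_jj, eq. (2)
    S_hat = np.argsort(M_diag)[-s:]                       # keep top-s, prune n - s coords
    # Truncated weighted correlation matrix M_S = (1/m) sum_i y_i^2 a_iS a_iS^T.
    A_S = A[:, S_hat]
    M_S = (A_S * (y ** 2)[:, None]).T @ A_S / m
    # Top eigenvector of the symmetric PSD matrix M_S = its top singular vector.
    _, vecs = np.linalg.eigh(M_S)
    v1 = vecs[:, -1]
    x0 = np.zeros(n)
    x0[S_hat] = np.sqrt(phi2) * v1                        # scale by phi to conserve power
    return x0
```

The output is s-sparse and determined only up to a global sign, so its quality is measured with the phase-invariant distance dist(·,·) of Section 2.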
Using a spectral technique, we find an initial vector in the reduced space which is close to the true signal, provided m = O(s² log n).

Theorem 3.1. The initial estimate x0, which is the output of Algorithm 1, is a small constant distance δ0 away from the true s-sparse signal x*, i.e.,

  dist(x0, x*) ≤ δ0 ‖x*‖₂,

where 0 < δ0 < 1, as long as the number of (Gaussian) measurements satisfies m ≥ Cs² log mn, with probability greater than 1 − 8/m.

This theorem is proved via Lemmas C.1 through C.4 (Appendix C), and the argument proceeds as follows. We evaluate the marginals M_jj in broadly two cases: j ∈ S and j ∈ S^c. The key idea is to establish one of the following: (1) if the signal coefficients obey min_{j∈S} |x*_j| = C‖x*‖₂/√s, then w.h.p. there exists a clear separation between the marginals M_jj for j ∈ S and j ∈ S^c, and Algorithm 1 picks up the correct support (i.e., Ŝ = S); (2) if there is no such restriction, even then the support Ŝ picked up in Algorithm 1 contains a bulk of the correct support S. The incorrect elements of Ŝ induce negligible error in estimating the initial vector. These approaches are illustrated in Figures 4 and 5 in Appendix C. W.h.p., the marginals satisfy M_jj < Θ for j ∈ S^c and M_jj > Θ for j ∈ S⁺, where S⁺ ⊆ Ŝ is a big chunk of the picked support, S⁺ = {j ∈ S : x*_j² ≥ 15 √((log mn)/m) ‖x*‖₂²}, separated by the threshold Θ (Lemmas C.1 and C.2). The identification of the support Ŝ (which provably contains a significant chunk S⁺ of the true support S) is used to construct the truncated correlation matrix M_Ŝ.
The top singular vector of this matrix M_Ŝ gives us a good initial estimate x0.
The final step of Algorithm 1 requires scaling the normalized vector v1 by a factor φ, which conserves the power in the signal (Lemma F.1 in Appendix F), w.h.p., where φ² is defined as

  φ² = (1/m) Σ_{i=1}^m y_i².  (4)

3.2 Descent to optimal solution

After obtaining an initial estimate x0, we construct a method to accurately recover x*. For this, we adapt the alternating minimization approach from [13]. The observation model (1) can be restated as:

  sign(⟨a_i, x*⟩) · y_i = ⟨a_i, x*⟩, for i = {1, 2, ..., m}.

We introduce the phase vector p ∈ R^m containing the (unknown) signs of the measurements, i.e., p_i = sign(⟨a_i, x⟩) for all i, and the phase matrix P = diag(p). Then our measurement model is modified as P*y = Ax*, where P* is the true phase matrix. We then minimize the loss function composed of variables x and P,

  min_{‖x‖₀≤s, P∈P} ‖Ax − Py‖₂,  (5)

where P is the set of all diagonal matrices in R^{m×m} with diagonal entries constrained to be in {−1, 1}. Hence the problem stated above is not convex. Instead, we alternate between estimating P and x as follows: (1) if we fix the signal estimate x, then the minimizer P is given in closed form as P = diag(sign(Ax)); we call this the phase estimation step; (2) if we fix the phase matrix P, the sparse vector x can be obtained by solving the signal estimation step:

  min_{x, ‖x‖₀≤s} ‖Ax − Py‖₂.  (6)

We employ the CoSaMP [41] algorithm to (approximately) solve the non-convex problem (6).
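The alternating scheme of (5)-(6) can be sketched as follows. For brevity, this sketch replaces CoSaMP with a simple hard-thresholded least-squares solve; this is a stand-in of our own choosing, not the solver analyzed in the paper, and dist(·,·) is the phase-invariant distance from Section 2:

```python
import numpy as np

def dist(x1, x2):
    """Distance up to a global sign: min(||x1 - x2||, ||x1 + x2||)."""
    return min(np.linalg.norm(x1 - x2), np.linalg.norm(x1 + x2))

def sparse_ls(A, b, s):
    """Stand-in for CoSaMP: least squares restricted to the top-s
    coordinates of the unconstrained solution (NOT the paper's solver)."""
    x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
    S = np.argsort(np.abs(x_full))[-s:]
    x = np.zeros(A.shape[1])
    x[S], *_ = np.linalg.lstsq(A[:, S], b, rcond=None)
    return x

def altmin_descent(A, y, x0, s, iters=50):
    """Alternate the phase estimation step p = sign(Ax) with the
    sparse signal estimation step min ||Ax - Py||_2, as in (5)-(6)."""
    x = x0.copy()
    for _ in range(iters):
        p = np.sign(A @ x)           # phase estimation step
        x = sparse_ls(A, p * y, s)   # signal estimation step
    return x
```

Given a sufficiently accurate warm start, most estimated signs agree with the true phases, and the few sign errors occur on measurements of small magnitude, which is why the per-step error contracts.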
We do not need to explicitly obtain the minimizer of (6); we only show a sufficient descent criterion, which we achieve by performing a careful analysis of the CoSaMP algorithm. For analysis reasons, we require that the entries of the input sensing matrix are distributed according to N(0, 1/m). This can be achieved by scaling down the inputs to CoSaMP, A and P_{t+1}y, by a factor of √m (see the x-update step of Algorithm 2). Another distinction is that we use a "warm start" CoSaMP routine in each iteration, where the initial guess of the solution to (6) is given by the current signal estimate.
We now analyze our proposed descent scheme. We obtain the following theoretical result:

Theorem 3.2. Given an initialization x0 satisfying Algorithm 1, if the number of (Gaussian) measurements satisfies m ≥ Cs log(n/s), then the iterates of Algorithm 2 satisfy:

  dist(x_{t+1}, x*) ≤ ρ0 dist(xt, x*),  (7)

where 0 < ρ0 < 1 is a constant, with probability greater than 1 − e^{−γm}, for positive constant γ.
The proof of this theorem can be found in Appendix E.

4 Block-sparse phase retrieval

The analysis of the proofs mentioned so far, as well as experimental results, suggest that we can reduce the sample complexity of successful sparse phase retrieval by exploiting further structural information about the signal. Block-sparse signals x* can be said to follow a sparsity model M_{s,b}, where M_{s,b} describes the set of all block-sparse signals with s non-zeros grouped into uniform pre-determined blocks of size b, such that the block-sparsity is k = s/b. We use the index set j_b = {1, 2, ..., k} to denote block indices.
We introduce the concept of block marginals, a block analogue of the signal marginals, which can be analyzed to crudely estimate the block support of the signal under consideration. We use this formulation, along with the alternating minimization approach that uses model-based CoSaMP [30], to descend to the optimal solution.

4.1 Initialization

Analogous to the concept of marginals defined above, we introduce block marginals M_{j_b j_b}, where M_jj is defined as in (2). For a block index j_b, we define:

  M_{j_b j_b} = √( Σ_{j∈j_b} M_jj² ),  (8)

to develop the initialization stage of our Block CoPRAM algorithm. Similar to the proof approach for CoPRAM, we evaluate the block marginals and use the top-k such marginals to obtain a crude approximation Ŝ_b of the true block support S_b. This support can be used to construct the truncated correlation matrix M_Ŝb. The top singular vector of this matrix M_Ŝb gives a good initial estimate x0 (Algorithm 3, Appendix A) for the Block CoPRAM algorithm (Algorithm 4, Appendix A). Through the evaluation of block marginals, we prove that the sample complexity required for a good initial estimate (and subsequently, successful signal recovery of block-sparse signals) is O(ks log n). This essentially reduces the sample complexity of signal recovery by a factor equal to the block length b over the sample complexity required for standard sparse phase retrieval.

Theorem 4.1.
The initial vector x0, which is the output of Algorithm 3, is a small constant distance δ_b away from the true signal x* ∈ M_{s,b}, i.e.,

  dist(x0, x*) ≤ δ_b ‖x*‖₂,

where 0 < δ_b < 1, as long as the number of (Gaussian) measurements satisfies m ≥ C(s²/b) log mn, with probability greater than 1 − 8/m.
The proof can be found in Appendix D, and carries forward intuitively from the proof for the compressive phase retrieval framework.

4.2 Descent to optimal solution

For the descent of Block CoPRAM to the optimal solution, the phase estimation step is the same as in CoPRAM. For the signal estimation step, we attempt to solve the same minimization as in (6), except with the additional constraint that the signal x* is block-sparse:

  min_{x∈M_{s,b}} ‖Ax − Py‖₂,  (9)

where M_{s,b} describes the block sparsity model. To approximate the solution to (9), we use the model-based CoSaMP approach of [30]. This is a straightforward specialization of the CoSaMP algorithm and has been shown to achieve improved sample complexity over existing approaches for standard sparse recovery.
Similar to Theorem 3.2 above, we obtain the following result (the proof can be found in Appendix E):

Theorem 4.2. Given an initialization x0 satisfying Algorithm 3, if the number of (Gaussian) measurements satisfies m ≥ C(s + (s/b) log(n/s)), then the iterates of Algorithm 4 satisfy:

  dist(x_{t+1}, x*) ≤ ρ_b dist(xt, x*),  (10)

where 0 < ρ_b < 1 is a constant, with probability greater than 1 − e^{−γm}, for positive constant γ.
The analysis so far has been made for uniform blocks of size b.
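The block-marginal computation (8) and the resulting block-support pruning can be sketched as follows, for uniform blocks; this is a NumPy transcription with names of our choosing:

```python
import numpy as np

def block_support(A, y, b, k):
    """Estimate the block support via block marginals (8):
    M_{jb jb} = sqrt(sum of M_jj^2 within block jb); keep the top-k blocks."""
    m, n = A.shape
    assert n % b == 0, "uniform pre-determined blocks of size b"
    M_diag = np.mean((y ** 2)[:, None] * A ** 2, axis=0)   # marginals M_jj, eq. (2)
    M_block = np.sqrt((M_diag.reshape(n // b, b) ** 2).sum(axis=1))
    top_blocks = np.argsort(M_block)[-k:]                  # crude block support S_b
    # Expand block indices into coordinate indices.
    S_hat = np.concatenate([np.arange(jb * b, (jb + 1) * b) for jb in top_blocks])
    return np.sort(S_hat)
```

The initialization then proceeds as in Algorithm 1, taking the top singular vector of the correlation matrix truncated to the estimated block support.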
However, the same algorithm can be extended to the case of sparse signals with non-uniform blocks or clusters (refer Appendix A).

Figure 1: Phase transitions for a signal of length n = 3,000, sparsity s, and block length b: (a) s = 20, b = 5, (b) s = 30, b = 5, and (c) s = 20, b = 20, 10, 5, 2, 1 (Block CoPRAM only). Each panel plots probability of recovery against the number of samples m; panels (a) and (b) compare CoPRAM, Block CoPRAM, ThWF, and SPARTA.

5 Experiments

We explore the performance of CoPRAM and Block CoPRAM on synthetic data. All numerical experiments were conducted using MATLAB 2016a on a computer with an Intel Xeon CPU at 3.3 GHz and 8 GB RAM. The nonzero elements of the unit-norm vector $x^* \in \mathbb{R}^{3000}$ are generated from $\mathcal{N}(0, 1)$. We repeated each experiment (fixed n, s, b, m) for 50 independent Monte Carlo trials in Figure 1 (a) and (b), and for 200 trials in Figure 1 (c). For our simulations, we compared our algorithms CoPRAM and Block CoPRAM with Thresholded Wirtinger Flow (Thresholded WF, or ThWF) [22] and SPARTA [23]. The parameters for these algorithms were carefully chosen as per the descriptions in their respective papers.

For the first experiment, we generated phase transition plots by evaluating the probability of empirical successful recovery, i.e., the fraction of the 50 trials in which recovery succeeded. The recovery probability for the four algorithms is displayed in Figure 1.
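The empirical recovery probability just described can be reproduced with a small Monte Carlo harness. Below is a sketch of our own scaffolding (not the paper's MATLAB code): the hypothetical `solver(y, A)` argument stands in for any of CoPRAM, Block CoPRAM, ThWF, or SPARTA, and success is declared when the estimate matches $x^*$ up to global sign.

```python
import numpy as np

def recovery_probability(solver, n, s, b, m_list, trials=50, tol=1e-3, seed=0):
    """Fraction of Monte Carlo trials in which solver(y, A) recovers a
    random block-sparse unit-norm signal, for each m in m_list."""
    rng = np.random.RandomState(seed)
    probs = []
    for m in m_list:
        successes = 0
        for _ in range(trials):
            # Random block-sparse unit-norm signal with k = s/b active blocks
            x = np.zeros(n)
            for jb in rng.choice(n // b, s // b, replace=False):
                x[jb * b:(jb + 1) * b] = rng.randn(b)
            x /= np.linalg.norm(x)
            A = rng.randn(m, n)          # Gaussian measurement matrix
            y = np.abs(A @ x)            # magnitude-only measurements
            xhat = solver(y, A)
            # Success up to the unavoidable global sign ambiguity
            if min(np.linalg.norm(xhat - x),
                   np.linalg.norm(xhat + x)) < tol:
                successes += 1
        probs.append(successes / trials)
    return probs
```

Sweeping `m_list` over, say, `range(500, 2001, 100)` with n = 3000 and s = 20 produces phase-transition curves of the kind shown in Figure 1(a).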
It can be noted that increasing the sparsity of the signal shifts the phase transitions to the right. However, the phase transition for Block CoPRAM shifts less noticeably, suggesting that the sample complexity m has a sub-quadratic dependence on s. We see that Block CoPRAM exhibits the lowest sample complexity at the phase transition in both cases (a) and (b) of Figure 1.

For the second experiment, we study the variation of the phase transition with block length for Block CoPRAM (Figure 1(c)). For this experiment we fixed a signal of length n = 3,000 and sparsity s = 20 (so that k = 1 for block length b = 20). We observe that the phase transitions improve with increasing block length. At block sparsity k = s/b = 20/10 = 2 (i.e., for large b, b → s), we observe a saturation effect, and the regime of the experiment is very close to the information-theoretic limit.

Several additional phase transition diagrams can be found in Figure 2 in Appendix B. The running times of our algorithms compare favorably with those of Thresholded WF and SPARTA (see Table 2 in Appendix B). We also show that Block CoPRAM is more robust to noisy Gaussian measurements than CoPRAM and SPARTA (see Figure 3 in Appendix B).

References

[1] Y. Shechtman, Y. Eldar, O. Cohen, H. Chapman, J. Miao, and M. Segev. Phase retrieval with application to optical imaging: a contemporary overview. IEEE Sig. Proc. Mag., 32(3):87–109, 2015.

[2] R. Millane. Phase retrieval in crystallography and optics. JOSA A, 7(3):394–411, 1990.

[3] A. Maiden and J. Rodenburg. An improved ptychographical phase retrieval algorithm for diffractive imaging. Ultramicroscopy, 109(10):1256–1262, 2009.

[4] R. Harrison. Phase problem in crystallography. JOSA A, 10(5):1046–1055, 1993.

[5] J. Miao, T. Ishikawa, Q. Shen, and T. Earnest. Extending x-ray crystallography to allow the imaging of noncrystalline materials, cells, and single protein complexes. Annu. Rev. Phys.
Chem., 59:387–410, 2008.

[6] R. Gerchberg and W. Saxton. A practical algorithm for the determination of phase from image and diffraction plane pictures. Optik, 35(237), 1972.

[7] J. Fienup. Phase retrieval algorithms: a comparison. Applied Optics, 21(15):2758–2769, 1982.

[8] S. Marchesini. Phase retrieval and saddle-point optimization. JOSA A, 24(10):3289–3296, 2007.

[9] K. Nugent, A. Peele, H. Chapman, and A. Mancuso. Unique phase recovery for nonperiodic objects. Physical Review Letters, 91(20):203902, 2003.

[10] M. Fickus, D. Mixon, A. Nelson, and Y. Wang. Phase retrieval from very few measurements. Linear Alg. Appl., 449:475–499, 2014.

[11] E. Candes, T. Strohmer, and V. Voroninski. PhaseLift: Exact and stable signal recovery from magnitude measurements via convex programming. Comm. Pure Appl. Math., 66(8):1241–1274, 2013.

[12] E. Candes, X. Li, and M. Soltanolkotabi. Phase retrieval via Wirtinger flow: Theory and algorithms. IEEE Trans. Inform. Theory, 61(4):1985–2007, 2015.

[13] P. Netrapalli, P. Jain, and S. Sanghavi. Phase retrieval using alternating minimization. In Adv. Neural Inf. Proc. Sys. (NIPS), pages 2796–2804, 2013.

[14] Y. Chen and E. Candes. Solving random quadratic systems of equations is nearly as easy as solving linear systems. In Adv. Neural Inf. Proc. Sys. (NIPS), pages 739–747, 2015.

[15] E. Candes, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory, 52(2):489–509, 2006.

[16] D. Needell, J. Tropp, and R. Vershynin. Greedy signal recovery review. In Proc. Asilomar Conf. Sig. Sys. Comput., pages 1048–1050. IEEE, 2008.

[17] E. Candes, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math., 59(8):1207–1223, 2006.

[18] K. Do Ba, P. Indyk, E. Price, and D.
Woodruff. Lower bounds for sparse recovery. In Proc. ACM Symp. Discrete Alg. (SODA), pages 1190–1197, 2010.

[19] H. Ohlsson, A. Yang, R. Dong, and S. Sastry. CPRL – an extension of compressive sensing to the phase retrieval problem. In Adv. Neural Inf. Proc. Sys. (NIPS), pages 1367–1375, 2012.

[20] Y. Chen, Y. Chi, and A. Goldsmith. Exact and stable covariance estimation from quadratic sampling via convex programming. IEEE Trans. Inform. Theory, 61(7):4034–4059, 2015.

[21] K. Jaganathan, S. Oymak, and B. Hassibi. Sparse phase retrieval: Convex algorithms and limitations. In Proc. IEEE Int. Symp. Inform. Theory (ISIT), pages 1022–1026. IEEE, 2013.

[22] T. Cai, X. Li, and Z. Ma. Optimal rates of convergence for noisy sparse phase retrieval via thresholded Wirtinger flow. Ann. Stat., 44(5):2221–2251, 2016.

[23] G. Wang, L. Zhang, G. Giannakis, M. Akcakaya, and J. Chen. Sparse phase retrieval via truncated amplitude flow. arXiv preprint arXiv:1611.07641, 2016.

[24] M. Iwen, A. Viswanathan, and Y. Wang. Robust sparse phase retrieval made easy. Appl. Comput. Harmon. Anal., 42(1):135–142, 2017.

[25] S. Bahmani and J. Romberg. Efficient compressive phase retrieval with constrained sensing vectors. In Adv. Neural Inf. Proc. Sys. (NIPS), pages 523–531, 2015.

[26] H. Qiao and P. Pal. Sparse phase retrieval using partial nested Fourier samplers. In Proc. IEEE Global Conf. Signal and Image Processing (GlobalSIP), pages 522–526. IEEE, 2015.

[27] S. Cai, M. Bakshi, S. Jaggi, and M. Chen. SUPER: Sparse signals with unknown phases efficiently recovered. In Proc. IEEE Int. Symp. Inform. Theory (ISIT), pages 2007–2011. IEEE, 2014.

[28] D. Yin, R. Pedarsani, X. Li, and K. Ramchandran. Compressed sensing using sparse-graph codes for the continuous-alphabet setting. In Proc. Allerton Conf. on Comm., Contr., and Comp., pages 758–765.
IEEE, 2016.

[29] R. Pedarsani, D. Yin, K. Lee, and K. Ramchandran. PhaseCode: Fast and efficient compressive phase retrieval based on sparse-graph codes. IEEE Trans. Inform. Theory, 2017.

[30] R. Baraniuk, V. Cevher, M. Duarte, and C. Hegde. Model-based compressive sensing. IEEE Trans. Inform. Theory, 56(4):1982–2001, Apr. 2010.

[31] J. Huang, T. Zhang, and D. Metaxas. Learning with structured sparsity. J. Machine Learning Research, 12(Nov):3371–3412, 2011.

[32] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. J. Royal Stat. Soc. Stat. Meth., 68(1):49–67, 2006.

[33] Y. Eldar, P. Kuppinger, and H. Bolcskei. Block-sparse signals: Uncertainty relations and efficient recovery. IEEE Trans. Sig. Proc., 58(6):3042–3054, 2010.

[34] M. Duarte, C. Hegde, V. Cevher, and R. Baraniuk. Recovery of compressible signals from unions of subspaces. In Proc. IEEE Conf. Inform. Science and Systems (CISS), March 2009.

[35] C. Hegde, P. Indyk, and L. Schmidt. A fast approximation algorithm for tree-sparse recovery. In Proc. IEEE Int. Symp. Inform. Theory (ISIT), June 2014.

[36] C. Hegde, P. Indyk, and L. Schmidt. Nearly linear-time model-based compressive sensing. In Proc. Intl. Colloquium on Automata, Languages, and Programming (ICALP), July 2014.

[37] V. Cevher, P. Indyk, C. Hegde, and R. Baraniuk. Recovery of clustered sparse signals from compressive measurements. In Proc. Sampling Theory and Appl. (SampTA), May 2009.

[38] C. Hegde, P. Indyk, and L. Schmidt. A nearly linear-time framework for graph-structured sparsity. In Proc. Int. Conf. Machine Learning (ICML), July 2015.

[39] V. Cevher, M. Duarte, C. Hegde, and R. Baraniuk. Sparse signal recovery using Markov random fields. In Adv. Neural Inf. Proc. Sys. (NIPS), Dec. 2008.

[40] C. Hegde, P. Indyk, and L. Schmidt. Approximation-tolerant model-based compressive sensing. In Proc. ACM Symp.
Discrete Alg. (SODA), Jan. 2014.

[41] D. Needell and J. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal., 26(3):301–321, 2009.

[42] M. Soltanolkotabi. Structured signal recovery from quadratic measurements: Breaking sample complexity barriers via nonconvex optimization. arXiv preprint arXiv:1702.06175, 2017.

[43] M. Talagrand. The generic chaining: upper and lower bounds of stochastic processes. Springer Science & Business Media, 2006.

[44] S. Dirksen. Tail bounds via generic chaining. Electronic J. Probability, 20, 2015.

[45] D. Gross, F. Krahmer, and R. Kueng. Improved recovery guarantees for phase retrieval from coded diffraction patterns. Appl. Comput. Harmon. Anal., 42(1):37–64, 2017.

[46] E. Candes, X. Li, and M. Soltanolkotabi. Phase retrieval from coded diffraction patterns. Appl. Comput. Harmon. Anal., 39(2):277–299, 2015.

[47] I. Waldspurger, A. d'Aspremont, and S. Mallat. Phase recovery, MaxCut and complex semidefinite programming. Mathematical Programming, 149(1-2):47–81, 2015.

[48] T. Goldstein and C. Studer. PhaseMax: Convex phase retrieval via basis pursuit. arXiv preprint arXiv:1610.07531, 2016.

[49] H. Zhang and Y. Liang. Reshaped Wirtinger flow for solving quadratic system of equations. In Adv. Neural Inf. Proc. Sys. (NIPS), pages 2622–2630, 2016.

[50] G. Wang and G. Giannakis. Solving random systems of quadratic equations via truncated generalized gradient flow. In Adv. Neural Inf. Proc. Sys. (NIPS), pages 568–576, 2016.

[51] K. Wei. Solving systems of phaseless equations via Kaczmarz methods: A proof of concept study. Inverse Problems, 31(12):125008, 2015.

[52] J. Sun, Q. Qu, and J. Wright. A geometric analysis of phase retrieval. In Proc. IEEE Int. Symp. Inform. Theory (ISIT), pages 2379–2383. IEEE, 2016.

[53] X. Li and V. Voroninski.
Sparse signal recovery from quadratic measurements via convex programming. SIAM J. Math. Anal., 45(5):3019–3033, 2013.

[54] K. Jaganathan, S. Oymak, and B. Hassibi. Recovery of sparse 1-D signals from the magnitudes of their Fourier transform. In Proc. IEEE Int. Symp. Inform. Theory (ISIT), pages 1473–1477. IEEE, 2012.

[55] Y. Shechtman, A. Beck, and Y. C. Eldar. GESPAR: Efficient phase retrieval of sparse signals. IEEE Trans. Sig. Proc., 62(4):928–938, 2014.

[56] C. Hegde, P. Indyk, and L. Schmidt. Fast algorithms for structured sparsity. Bulletin of the EATCS, 1(117):197–228, Oct. 2015.

[57] R. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Trans. Inform. Theory, 56(6):2980–2998, 2010.

[58] B. Laurent and P. Massart. Adaptive estimation of a quadratic functional by model selection. Ann. Stat., pages 1302–1338, 2000.

[59] C. Davis and W. Kahan. The rotation of eigenvectors by a perturbation. III. SIAM J. Num. Anal., 7(1):1–46, 1970.

[60] V. Bentkus. An inequality for tail probabilities of martingales with differences bounded from one side. J. Theoretical Prob., 16(1):161–173, 2003.