{"title": "Blind Calibration in Compressed Sensing using Message Passing Algorithms", "book": "Advances in Neural Information Processing Systems", "page_first": 566, "page_last": 574, "abstract": "Compressed sensing (CS) is a concept that allows to acquire compressible signals with a small number of measurements. As such, it is very attractive for hardware implementations. Therefore, correct calibration of the hardware is a central issue. In this paper we study the so-called blind calibration, i.e. when the training signals that are available to perform the calibration are sparse but unknown. We extend the approximate message passing (AMP) algorithm used in CS to the case of blind calibration. In the calibration-AMP, both the gains on the sensors and the elements of the signals are treated as unknowns. Our algorithm is also applicable to settings in which the sensors distort the measurements in other ways than multiplication by a gain, unlike previously suggested blind calibration algorithms based on convex relaxations. We study numerically the phase diagram of the blind calibration problem, and show that even in cases where convex relaxation is possible, our algorithm requires a smaller number of measurements and/or signals in order to perform well.", "full_text": "Blind Calibration in Compressed Sensing using\n\nMessage Passing Algorithms\n\nChristophe Sch\u00a8ulke\n\nUniv Paris Diderot, Sorbonne Paris Cit\u00b4e,\n\nESPCI and CNRS UMR 7083\n\nParis 75005, France\n\nFlorent Krzakala\n\nENS and CNRS UMR 8550,\nESPCI and CNRS UMR 7083\n\nParis 75005, France\n\nFrancesco Caltagirone\n\nInstitut de Physique Th\u00b4eorique\n\nCEA Saclay and CNRS URA 2306\n\n91191 Gif-sur-Yvette, France\n\nLenka Zdeborov\u00b4a\n\nInstitut de Physique Th\u00b4eorique\n\nCEA Saclay and CNRS URA 2306\n\n91191 Gif-sur-Yvette, France\n\nAbstract\n\nCompressed sensing (CS) is a concept that allows to acquire compressible signals\nwith a small number of measurements. 
As such, it is very attractive for hardware implementations. Therefore, correct calibration of the hardware is a central issue. In this paper we study so-called blind calibration, i.e. the case when the training signals that are available to perform the calibration are sparse but unknown. We extend the approximate message passing (AMP) algorithm used in CS to the case of blind calibration. In the calibration-AMP, both the gains on the sensors and the elements of the signals are treated as unknowns. Our algorithm is also applicable to settings in which the sensors distort the measurements in ways other than multiplication by a gain, unlike previously suggested blind calibration algorithms based on convex relaxations. We study numerically the phase diagram of the blind calibration problem, and show that even in cases where convex relaxation is possible, our algorithm requires a smaller number of measurements and/or signals in order to perform well.

1 Introduction

The problem of acquiring an N-dimensional signal x through M linear measurements, y = Fx, arises in many contexts. The Compressed Sensing (CS) approach [1, 2] exploits the fact that, in many cases of interest, the signal is K-sparse (in an appropriate known basis), meaning that only K = ρN out of the N components are non-zero. Compressed sensing theory shows that a K-sparse N-dimensional signal can be reconstructed from far fewer than N linear measurements [1, 2], thus saving acquisition time and cost, or increasing the resolution. In the most common setting, the linear M × N map F is considered to be known.

Nowadays, the concept of compressed sensing is very attractive for hardware implementations. However, one of the main issues when building hardware revolves around calibration. Usually the sensors introduce a distortion (or decalibration) to the measurements in the form of some unknown gains. 
Calibration consists in determining the transfer function between the measurements and the readings from the sensor. In some applications, dealing with distributed sensors or radars for instance, the location or intrinsic parameters of the sensors are not exactly known [3, 4]. Similar distortion can be found in applications with microphone arrays [5]. The need for calibration has been emphasized in a number of other works, see e.g. [6, 7, 8]. One common way of dealing with calibration (apart from ignoring it or treating it as measurement noise) is supervised calibration, in which some known training signals x_l, l = 1, . . . , P, and the corresponding observations y_l are used to estimate the distortion parameters. Given a sparse signal recovery problem, if we are not able to previously estimate the distortion parameters via supervised calibration, we need to estimate the unknown signals and the unknown distortion parameters simultaneously; this is known as blind (unsupervised) calibration. If such blind calibration is computationally feasible, it might in practice be simpler to carry out than supervised calibration. The main contribution of this paper is a computationally efficient message passing algorithm for blind calibration.

1.1 Setting

We state the problem of blind calibration in the following way. First we introduce an unknown distortion parameter (we will also use equivalently the term decalibration parameter or gain) d_µ for each of the sensors, µ = 1, . . . , M. Note that d_µ can also represent a vector of several parameters. We consider that the signal is linearly projected by a known M × N measurement matrix F and only then distorted according to some known transfer function h. This transfer function can be probabilistic (noisy), non-linear, etc. 
Each sensor µ then provides the distorted and noisy reading (measure) y_µ = h(z_µ, d_µ, w_µ), where z_µ = Σ_{i=1}^N F_µi x_i. As is common in CS, we focus on the case where the measurement matrix F is iid Gaussian with zero mean. For the measurement noise w_µ, one usually considers iid Gaussian noise with variance Δ, added to z_µ.

In order to perform blind calibration, we need to measure several statistically diverse signals. Given a set of N-dimensional K-sparse signals x_l with l = 1, · · · , P, for each of the signals we consider M sensor readings

y_µl = h(z_µl, d_µ, w_µl) , where z_µl = Σ_{i=1}^N F_µi x_il ,   (1)

where d_µ are the signal-independent distortion parameters, w_µl is a signal-dependent measurement noise, and h is an arbitrary known function of these variables with standard regularity requirements. To illustrate a situation in which one has sample-dependent noise w_µl and sample-independent distortion d_µ, consider for instance sound sensors placed in space at positions d_µ that are not exactly known. The positions, however, do not change when different sounds are recorded. The noise w_µl is then the ambient noise that is different during every recording.

The final inference problem is hence as follows: given the M × P measurements y_µl and a perfect knowledge of the matrix F, we want to infer both the P different signals {x_1, · · · , x_P} and the M distortion parameters d_µ, µ = 1, · · · , M. In this work we place ourselves in the Bayesian setting where we assume the distribution of the signal elements, P_X, and of the distortion coefficients, P_D, to be known.

1.2 Relation to previous work

As far as we know, the problem of blind calibration was first studied in the context of compressed sensing in [9], where the distortions were considered as multiplicative, i.e. 
the transfer function was

h(z_µl, d_µ, w_µl) = (z_µl + w_µl) / d_µ .   (2)

A subsequent work [10] considers a more general case in which the distortion parameters are d_µ = (g_µ, θ_µ) and the transfer function is h(z_µl, d_µ, w_µl) = e^{iθ_µ} (z_µl + w_µl) / g_µ. Both [9] and [10] applied convex optimization based algorithms to the blind calibration problem, and their approach seems to be limited to the above special cases of transfer functions. Our approach is able to deal with a general transfer function h and, moreover, for the product transfer function (2) it outperforms the algorithm of [9].

The most commonly used algorithm for signal reconstruction in CS is the ℓ1 minimization of [1]. In CS without noise and for measurement matrices with iid Gaussian elements, the ℓ1 minimization algorithm leads to exact reconstruction as long as the measurement rate α = M/N > α_DT in the limit of large signal dimension, where α_DT is a well known phase transition of Donoho and Tanner [11]. The blind calibration algorithm of [9, 10] also directly uses ℓ1 minimization for reconstruction.

In the last couple of years, the theory of CS witnessed large progress thanks to the development of message passing algorithms based on standard loopy Belief Propagation (BP) and their analysis [12, 13, 14, 15, 16]. In the context of compressed sensing, canonical loopy BP is difficult to implement because its messages would be probability distributions over a continuous support. At the same time, in problems such as compressed sensing, a Gaussian or quadratic approximation of BP still contains the information necessary for a successful reconstruction of the signal. Such approximations of loopy BP originated in works on CDMA multiuser detection [17, 18]. 
In compressed sensing, the Gaussian approximation of BP is known as approximate message passing (AMP) [12, 13], and it was used to prove that with properly designed measurement matrices F the signal can be reconstructed as long as the number of measurements is larger than the number of non-zero components in the signal, thus closing the gap between the Donoho-Tanner transition and the information-theoretic lower bound [15, 16]. Even without a particular design of the measurement matrices, the AMP algorithm outperforms ℓ1 minimization for a large class of signals. Importantly for the present work, [14] generalized the AMP algorithm to deal with a wider range of input and output functions. For some of those, generalizations of the ℓ1-minimization based approach are no longer convex, and hence they no longer have the advantage of provable computational tractability.

The following two works have considered blind-calibration-related problems with the use of AMP-like algorithms. In [19] the authors use AMP combined with expectation maximization to calibrate gains that act on the signal components rather than on the measurement components as we consider here. In [20] the authors study the case when every element of the measurement matrix F has to be calibrated, in contrast to the row-constant gains considered in this paper. The setting of [20] is much closer to the dictionary learning problem and is much more demanding, both computationally and in terms of the number of different signals necessary for successful calibration.

1.3 Contributions

In this work we extend the generalized approximate message passing (GAMP) algorithm of [14] to the problem of blind calibration with a general transfer function h, eq. (1). We denote it the calibration-AMP or Cal-AMP algorithm. Cal-AMP uses P > 1 unknown sparse signals to learn both the different signals x_l, l = 1, . . .
, P , and the distortion parameters d_µ, µ = 1, . . . , M, of the sensors. We hence overcome the limitation of the blind calibration algorithms presented in [9, 10] to the class of settings for which the calibration can be written as a convex optimization problem. In the second part of this paper we analyze the performance of Cal-AMP for the product transfer function (2) used in [9] and demonstrate its scalability and better performance with respect to their ℓ1-based calibration approach. In the numerical study we observe a sharp phase transition generalizing the phase transition seen for AMP in compressed sensing [21]. Note that for the blind calibration problem to be solvable, we need the amount of information contained in the sensor readings, PM, to be at least as large as the size of the vector of distortion parameters, M, plus the number of non-zero components of all the signals, KP. Defining ρ = K/N and α = M/N, this leads to αP ≥ ρP + α. If we fix the number of signals P we have a well defined line in the (ρ, α)-plane given by

α ≥ ρ P / (P − 1) ≡ α_min ,   (3)

below which exact calibration cannot be possible. We will compare the empirically observed phase transition for blind calibration to this theoretical bound, as well as to the phase transition that would have been observed in pure CS, i.e. if we knew the distortion parameters.

2 The Calibration-AMP algorithm

The Cal-AMP algorithm is based on a Bayesian probabilistic formulation of the reconstruction problem. 
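The counting bound (3) above can be sanity-checked numerically; a minimal sketch (the helper name `alpha_min` is ours, not from the paper):

```python
def alpha_min(rho: float, P: int) -> float:
    """Counting lower bound of eq. (3): alpha >= P * rho / (P - 1).

    For P = 1 the bound cannot be met for any finite measurement rate,
    consistent with a single signal never sufficing for blind calibration.
    """
    if P < 2:
        return float("inf")
    return P * rho / (P - 1)

# The bound decreases towards the pure-CS counting bound alpha = rho as P grows.
bounds = [alpha_min(0.2, P) for P in (2, 3, 10)]
```

As P grows, the extra cost of learning the M gains is amortized over more signals, which is why the bound approaches α = ρ.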
Denoting P_X(x_il) the assumed empirical distribution of the components of the signal, P_W(w_µl) the assumed probability distribution of the components of the noise, and P_D(d_µ) the assumed empirical distribution of the distortion parameters, the Bayes formula yields

P(x, d|F, y) = (1/Z) ∏_{i,l=1}^{N,P} P_X(x_il) ∏_{µ=1}^M P_D(d_µ) ∏_{l,µ=1}^{P,M} ∫ dw_µl P_W(w_µl) δ[y_µl − h(z_µl, d_µ, w_µl)] ,   (4)

where Z is a normalization constant and z_µl = Σ_i F_µi x_il. We denote the marginals of the signal components ν^x_il(x_il) = ∫ ∏_µ dd_µ ∏_{jn≠il} dx_jn P(x, d|F, y) and those of the distortion parameters ν^d_µ(d_µ) = ∫ ∏_{γ≠µ} dd_γ ∏_il dx_il P(x, d|F, y). The estimators x*_il that minimize the expected mean-squared error (MSE) of the signals and the estimators d*_µ of the distortion parameters are the averages w.r.t. the marginal distributions, namely x*_il = ∫ dx_il x_il ν^x_il(x_il) and d*_µ = ∫ dd_µ d_µ ν^d_µ(d_µ). An exact computation of these estimates is not tractable in any known way, so we use instead a belief-propagation based approximation that has proven fast and efficient for the CS problem [12, 13, 14]. We recall that GAMP, which leads to a considerably simpler inference problem, is recovered if we set P_D(d_µ) = δ(d_µ − 1), and that the usual AMP is recovered by setting h(z, d, w) = z + w on top of it.

Figure 1: Graphical model representing the blind calibration problem. Here the dimensionality of the signal is N = 8, the number of sensors is M = 3, and the number of signals used for calibration P = 2. 
The variable nodes x_il and d_µ are depicted as circles, the factor nodes as squares.

Given the factor graph representation of the calibration problem in Fig. 1, the canonical belief propagation equations for the probability measure (4) are written in terms of NPM pairs of messages m̃_µl→il(x_il) and m_il→µl(x_il), representing probability distributions on the signal component x_il, and PM pairs of messages n_µ→µl(d_µ) and ñ_µl→µ(d_µ), representing probability distributions on the distortion parameter d_µ. Following the lines of [12, 13, 14, 15], with the use of the central limit theorem, a Gaussian approximation, and neglecting terms that go to zero as N → ∞, the BP equations can be closed using only the means and variances of the messages m_il→µl and n_µ→µl:

k_µ→µl = ∫ dd_µ n_µ→µl(d_µ) d_µ ,   l_µ→µl = ∫ dd_µ n_µ→µl(d_µ) d_µ² − k²_µ→µl ,   (5)

a_il→µl = ∫ dx_il m_il→µl(x_il) x_il ,   v_il→µl = ∫ dx_il m_il→µl(x_il) x_il² − a²_il→µl .   (6)

Moreover, again neglecting only terms that go to zero as N → ∞, we can write closed equations on quantities that correspond to the variable and factor nodes, instead of messages running between variables and factor nodes. For this we introduce ω_µl = Σ_i F_µi a_il→µl and V_µl = Σ_i F²_µi v_il→µl. The derivation of the Cal-AMP algorithm is similar to those in [12, 13, 14, 15]. The resulting algorithm is to leading order equivalent to the belief propagation for the factor graph from Fig. 
1. To summarize the resulting algorithm we define

G̃(y, d, ω, v) = ∫ dz dw P_W(w) δ[h(z, d, w) − y] e^{−(z−ω)²/(2v)} ,   (7)

G(y_µ·, ω_µ·, V_µ·, θ) = ln [ ∫ dd P_D(d) e^{θd} ∏_{n=1}^P G̃(y_µn, d, ω_µn, V_µn) ] ,   (8)

where µ· indicates a dependence on all the variables labeled µn with n = 1, · · · , P, and δ(·) is the Dirac delta function. Similarly as Rangan in [14], we define P output functions

g^l_out(y_µ·, ω_µ·, V_µ·) = ∂G(y_µ·, ω_µ·, V_µ·, θ = 0) / ∂ω_µl .   (9)

Note that each of the output functions depends on all the P different signals. We also define the following input functions

f^x_a(Σ², R) = [x]_X ,   f^x_c(Σ², R) = [x²]_X − [x]²_X ,   (10)
the measure\n\nMX (x, \u03a32, R) =\n\n1\n\nZ(\u03a32, R)\n\nPX (x) e\n\n\u2212 (x\u2212R)2\n2\u03a32\n\n.\n\nGiven the above de\ufb01nitions, the iterative calibration-AMP algorithm reads as follows:\n\nV t+1\n\u00b5l\n\n=\n\nF 2\n\u00b5i vt\n\nil ,\n\nF\u00b5iat\n\nil \u2212 V t+1\n\n\u00b5l\n\net+1\n\u00b5l\n\n,\n\net+1\n\u00b5l\n\n= gl\n\nout(y\u00b5\u00b7, \u03c9t\n\n\u00b5\u00b7, V t\n\n\u00b5\u00b7) ,\n\ngl\nout(y\u00b5\u00b7, \u03c9t\n\n\u00b5\u00b7, V t+1\n\n\u00b5\u00b7\n\n) ,\n\n(\u03a3t+1\n\nil\n\n)2 =\n\n\u00b5i ht+1\nF 2\n\u00b5l\n\n,\n\nRt+1\n\nil = ail +\n\nF\u00b5i et+1\n\u00b5l\n\n(\u03a3t+1\n\nil\n\n)2 ,\n\n(14)\n\nat+1\nil\n\n= f x\n\n)2, Rt+1\n\n(15)\nas the mean and variance of the assumed distribution PX (\u00b7),\nwe initialize \u03c9t=0\nand iterate these equations until convergence. At every time-step the quantity ail is the estimate for\nthe signal element xil, and vil is the approximate error of this estimate. The estimate and its error\nfor the distortion parameter d\u00b5 can be computed as\n\nil\nil\nand vt=0\n\n\u00b5l = y\u00b5l, at=0\n\nvt+1\nil = f x\n\n)2, Rt+1\n\n) ,\n\n) ,\n\nil\n\nil\n\nil\n\nil\n\n\u2202\n\u2202\u03b8\n\nand\n\nkt+1\n\u00b5 =\n\nG(yt+1\n\n\u00b5\u00b7\n\n, \u03c9t+1\n\n\u00b5\u00b7\n\n, V t+1\n\n\u00b5\u00b7\n\n, \u03b8)\nBy setting PD(d\u00b5) = \u03b4(d\u00b5 \u2212 dtrue\n), and simplifying eq. (8), readers familiar with the work of\nRangan [14] will recognize the GAMP algorithm in eqs. (12-15). Note that for a general transfer\nfunction h the generating function G (8) has to be evaluated numerically. 
The overall complexity of the Cal-AMP algorithm scales as O(MNP) and hence shares the scalability advantages of AMP [12].

2.1 Cal-AMP for the product transfer function

In the numerical section of this paper we focus on a specific case of the transfer function h(z_µl, d_µ, w_µl), defined in eq. (2). We consider the measurement noise w_µl to be Gaussian of zero mean and variance Δ. This transfer function was considered in the work of [9] and we will hence be able to compare the performance of Cal-AMP directly to the convex optimization investigated in [9]. For the product transfer function (2), most integrals requiring a numerical computation in the general case can be expressed analytically, and we can replace equations (13) by

e^{t+1}_µl = (k^t_µ y_µl − ω^t_µl) / (V^t_µl + Δ) ,   h^{t+1}_µl = 1/(V^{t+1}_µl + Δ) − l^t_µ y²_µl / (V^{t+1}_µl + Δ)² ,   (17)

(C^{t+1}_µ)² = [Σ_n y²_µn / (V^{t+1}_µn + Δ)]^{−1} ,   T^{t+1}_µ = (C^{t+1}_µ)² Σ_n y_µn ω^{t+1}_µn / (V^{t+1}_µn + Δ) ,   (18)

k^{t+1}_µ = f^d_a((C^{t+1}_µ)², T^{t+1}_µ) ,   l^{t+1}_µ = f^d_c((C^{t+1}_µ)², T^{t+1}_µ) ,   (19)

where we have introduced the functions f^d_a and f^d_c similarly to those in eq. 
(10), except that the expectation is made w.r.t. the measure

M_D(d, C², T) = (1/Z(C², T)) P_D(d) |d|^P e^{−(d−T)²/(2C²)} .   (20)

3 Experimental results

Our simulations were performed using a MATLAB implementation of the Cal-AMP algorithm presented in the previous section, which is available online [22]. We focused on the noiseless case Δ = 0, for which exact reconstruction is conceivable. We tested the algorithm on randomly generated Gauss-Bernoulli signals with density of non-zero elements ρ, normally distributed around zero with unit variance. For the present experiments the algorithm uses this information via a matching distribution P_X(x_il). The situation when P_X mismatches the true signal distribution was discussed for AMP for compressed sensing in [21].

The distortion parameters d_µ were generated from a uniform distribution centered at d = 1, with variance σ² and width 2√3 σ. This ensures that, as σ² → 0, the results of standard compressed sensing are recovered, while the distortions grow with σ². For numerical stability purposes, the parameter σ² used in the update functions of Cal-AMP was taken to be slightly larger than the variance used to create the actual distortion parameters. For the same reasons, we also added a small noise Δ = 10^{−17} and used damping in the iterations in order to avoid oscillatory behavior. In this noiseless case we iterate the Cal-AMP equations until the quantity crit = (1/MP) Σ_µl (k_µ y_µl − Σ_i F_µi a_il)² becomes smaller than the numerical precision of the implementation, around 10^{−16}, or until that quantity does not decrease any more over 100 iterations. Success or failure of the reconstruction is usually determined by looking at the mean squared error (MSE) between the true signal x⁰_l and the reconstructed one a_l. 
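The data-generation step just described can be sketched as follows (sizes are illustrative; the paper's implementation is in MATLAB [22]):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: N-dimensional signals, M = alpha*N sensors, P signals.
N, M, P = 1000, 750, 2
rho, sigma2 = 0.2, 0.01        # signal density and gain variance

# Gauss-Bernoulli signals: entries non-zero with probability rho,
# non-zero entries drawn from N(0, 1).
X = rng.standard_normal((N, P)) * (rng.random((N, P)) < rho)

# iid zero-mean Gaussian measurement matrix.
F = rng.standard_normal((M, N)) / np.sqrt(N)

# Gains: uniform around d = 1 with variance sigma^2, i.e. width 2*sqrt(3)*sigma.
d = 1.0 + (rng.random(M) - 0.5) * 2.0 * np.sqrt(3.0 * sigma2)

# Noiseless product transfer function, eq. (2): y_mul = z_mul / d_mu.
Y = (F @ X) / d[:, None]
```

A uniform distribution of half-width √(3σ²) has variance exactly σ², matching the construction in the text.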
In the noiseless setting the product transfer function h leads to a scaling invariance, and therefore a better measure of success is the cross-correlation between the real and the recovered signal (used in [10]) or a corrected version of the MSE, defined by

MSE_corr = (1/NP) Σ_il (x⁰_il − ŝ a_il)² , where ŝ = (1/M) Σ_µ d⁰_µ / k_µ   (21)

is an estimate of the scaling factor s. Slight deviations between empirical and theoretical means due to the finite size of M and N lead to important differences between MSE and MSE_corr, only the latter truly going to zero for finite N and M.

Figure 2: Phase diagrams for different numbers P of calibrating signals: the measurement rate α = M/N is plotted against the density of the signal ρ = K/N. The plotted value is the decimal logarithm of MSE_corr (21) achieved for one random instance. Black indicates failure of the reconstruction, while white represents perfect reconstruction (i.e. an MSE of the order of the numerical precision). In this figure the distortion variance is σ² = 0.01 and N = 1000. While for P = 1 reconstruction is never possible, for P > 1 there is a phase transition very close to the lower bound defined by α_min in equation (3) or to the phase transition line of the pure compressed sensing problem α_CS. Note, however, that in the large N limit we expect the calibration phase transition to be strictly larger than both α_min and α_CS. Note also that while for compressed sensing this diagram is usually plotted only for α ≤ 1, the region α > 1 displays pertinent information for blind calibration.

Fig. 2 shows the empirical phase diagrams in the α-ρ plane obtained from the Cal-AMP algorithm for different numbers of signals P. 
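The corrected error measure (21) is straightforward to compute; a minimal sketch (the function name is ours):

```python
import numpy as np

def mse_corr(X0, A, d0, k):
    """Corrected MSE of eq. (21): removes the global scaling ambiguity of the
    product transfer function using the estimated scale s_hat."""
    s_hat = np.mean(d0 / k)              # s_hat = (1/M) sum_mu d0_mu / k_mu
    return np.mean((X0 - s_hat * A) ** 2)
```

Under the invariance x → s x, d → s d the measurements y are unchanged, so the plain MSE can stay large even when the reconstruction is perfect up to scale; MSE_corr vanishes in that case.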
For P = 1 the reconstruction is never exact, and effectively this case corresponds to reconstruction without any attempt to calibrate. For any P > 1, there is a sharp phase transition taking place, with a jump in MSE_corr of ten orders of magnitude. As P increases, the phase of exact reconstruction gets bigger and tends to the one observed in Bayesian compressed sensing [15]. Remarkably, for small values of the density ρ, the position of the Cal-AMP phase transition is very close to the CS one already for P = 2, and Cal-AMP performs almost as well as in the total absence of distortion.

Figure 3: Left: Cal-AMP phase transition as the system size N grows. The curves are obtained by averaging log10(MSE_corr) over 100 samples, reflecting the probability of correct reconstruction in the region close to the phase transition, where it is not guaranteed. Parameters are ρ = 0.2, P = 2, σ² = 0.0251. For higher values of N, the phase transition becomes sharper. Right: Mean number of iterations necessary for reconstruction, when the true signal is successfully recovered. Far from the phase transition, increasing N does not visibly increase the number of iterations for these system sizes, showing that our algorithm works in linear time. The number of needed iterations increases drastically as one approaches the phase transition.

Figure 4: Left: Position of the phase transition in α for different distortion variances σ². The left vertical line represents the position of the CS phase transition, the right one is the counting bound of eq. (3). 
With growing distortion, larger measurement rates become necessary for perfect calibration and reconstruction. Intermediary values of MSE_corr are obtained in a region where perfect calibration is not possible, but distortions are small enough for the uncalibrated AMP to make only small mistakes. The parameters here are P = 2 and ρ = 0.2. Right: Phase diagram as the variance of the distortions σ² and the number of signals P vary, for ρ = 0.5, α = 0.75 and N = 1000.

Fig. 3 shows the behavior near the phase transition, giving insights about the influence of the system size and the number of iterations needed for precise calibration and reconstruction. In Fig. 4, we show the jump in the MSE on a single instance as the measurement rate α decreases. The right part is the phase diagram in the σ²-P plane.

In [9, 10], a calibration algorithm using ℓ1 minimization has been proposed. While in that case no assumption on the distribution of the signals and of the gains is needed, for most practical cases it is expected to perform worse than Cal-AMP if these distributions are known or reasonably approximated. We implemented the algorithm of [9] in MATLAB using the CVX package [23]. Due to longer running times, experiments were made using a smaller system size N = 100. We also remind at this point that whereas the Cal-AMP algorithm works for a generic transfer function (1), the ℓ1-minimization based calibration is restricted to the transfer functions considered in [9, 10]. Fig. 5 shows a comparison of the performances of the two algorithms in the α-ρ phase diagrams. 
Cal-AMP clearly outperforms the ℓ1 minimization, in the sense that the region in which calibration is possible is much larger.

Figure 5: Comparison of the empirical phase diagrams obtained with the Cal-AMP algorithm proposed here (top) and the ℓ1-minimization calibration algorithm of [9] (bottom), averaged over several random samples; black indicates failure, white indicates success. The area where reconstruction is possible is consistently much larger for Cal-AMP than for ℓ1-minimization-based calibration. The plotted lines are the phase transitions for CS without unknown distortions with the AMP algorithm (α_CS, in red, from [21]) and with ℓ1 minimization (the Donoho-Tanner transition α_DT, in blue, from [11]). The line α_min is the lower counting bound from eq. (3). The advantage of Cal-AMP over ℓ1-minimization calibration is clear. Note that in both cases the region close to the transition is blurred due to finite system size, hence a region of grey pixels (again, the effect is more pronounced for the ℓ1 algorithm).

4 Conclusion

We have presented the Cal-AMP algorithm for blind calibration in compressed sensing, a problem where the outputs of the measurements are distorted by some unknown gains on the sensors, eq. (1). The Cal-AMP algorithm makes it possible to jointly infer sparse signals and the distortion parameters of each sensor even with a very small number of signals, and is computationally as efficient as the GAMP algorithm for compressed sensing [14]. Another advantage w.r.t. 
previous works is that the Cal-AMP algorithm works for a generic transfer function between the measurements and the readings from the sensor, not only for those that permit a convex formulation of the inference problem as in [9, 10]. In the numerical analysis, we focused on the case of the product transfer function (2) studied in [9]. Our results show that, for the chosen parameters, calibration is possible with a very small number of different sparse signals P (i.e. P = 2 or P = 3), even very close to the absolute minimum number of measurements required by the counting bound (3). Comparison with the ℓ1-minimizing calibration algorithm clearly shows lower requirements on the measurement rate α and on the number of signals P for Cal-AMP. The Cal-AMP algorithm for blind calibration is scalable and simple to implement. Its efficiency shows that supervised training is unnecessary. We expect Cal-AMP to become useful in practical compressed sensing implementations.

Asymptotic analysis of AMP can be done using the state evolution approach [12]. In the case of Cal-AMP, however, the analysis of the resulting state evolution equations is more difficult and has hence been postponed to future work. Future work also includes the study of the robustness to mismatch between the assumed and true distributions of signal elements and distortion parameters, as well as expectation-maximization based learning of the various parameters. Finally, the use
Finally, the use of spatially coupled measurement matrices [15, 16] could further improve the performance of the algorithm and make the phase transition asymptotically coincide with the information-theoretic counting bound (3).

[Figure 5 panel residue removed: panels for P = 2, 3, 5, 10 with axes ρ (horizontal) and α (vertical), the lines α_min, α_CS and α_DT, and the color scale ⟨log10(MSE_corr)⟩.]

References

[1] E. J. Candès and T. Tao. Decoding by linear programming. IEEE Trans. Inform. Theory, 51:4203, 2005.
[2] D. L. Donoho. Compressed sensing. IEEE Trans. Inform. Theory, 52:1289, 2006.
[3] B. C. Ng and C. M. S. See. Sensor-array calibration using a maximum-likelihood approach. IEEE Transactions on Antennas and Propagation, 44(6):827–835, 1996.
[4] Z. Yang, C. Zhang, and L. Xie. Robustly stable signal recovery in compressed sensing with structured matrix perturbation. IEEE Transactions on Signal Processing, 60(9):4658–4671, 2012.
[5] R. Mignot, L. Daudet, and F. Ollivier. Compressed sensing for acoustic response reconstruction: Interpolation of the early part. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), pages 225–228, 2011.
[6] T. Ragheb, J. N. Laska, H. Nejati, S. Kirolos, R. G. Baraniuk, and Y. Massoud. A prototype hardware for random demodulation based compressive analog-to-digital conversion. In 51st Midwest Symposium on Circuits and Systems (MWSCAS), pages 37–40. IEEE, 2008.
[7] J. A. Tropp, J. N. Laska, M. F. Duarte, J. K. Romberg, and R. G. Baraniuk. Beyond Nyquist: Efficient sampling of sparse bandlimited signals. IEEE Trans. Inform. Theory, 56(1):520–544, 2010.
[8] P. J. Pankiewicz, T. Arildsen, and T. Larsen. Model-based calibration of filter imperfections in the random demodulator for compressive sensing.
arXiv:1303.6135, 2013.
[9] R. Gribonval, G. Chardon, and L. Daudet. Blind calibration for compressed sensing by convex optimization. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2713–2716, 2012.
[10] C. Bilen, G. Puy, R. Gribonval, and L. Daudet. Blind sensor calibration in sparse recovery using convex optimization. In 10th Int. Conf. on Sampling Theory and Applications, 2013.
[11] D. L. Donoho and J. Tanner. Sparse nonnegative solution of underdetermined linear equations by linear programming. Proc. Natl. Acad. Sci., 102(27):9446–9451, 2005.
[12] D. L. Donoho, A. Maleki, and A. Montanari. Message-passing algorithms for compressed sensing. Proc. Natl. Acad. Sci., 106(45):18914–18919, 2009.
[13] D. L. Donoho, A. Maleki, and A. Montanari. Message passing algorithms for compressed sensing: I. Motivation and construction. In IEEE Information Theory Workshop (ITW), pages 1–5, 2010.
[14] S. Rangan. Generalized approximate message passing for estimation with random linear mixing. In Proc. of the IEEE Int. Symp. on Inform. Theory (ISIT), pages 2168–2172, 2011.
[15] F. Krzakala, M. Mézard, F. Sausset, Y. F. Sun, and L. Zdeborová. Statistical physics-based reconstruction in compressed sensing. Phys. Rev. X, 2:021005, 2012.
[16] D. L. Donoho, A. Javanmard, and A. Montanari. Information-theoretically optimal compressed sensing via spatial coupling and approximate message passing. In Proc. of the IEEE Int. Symposium on Information Theory (ISIT), pages 1231–1235, 2012.
[17] J. Boutros and G. Caire. Iterative multiuser joint decoding: Unified framework and asymptotic analysis. IEEE Trans. Inform. Theory, 48(7):1772–1793, 2002.
[18] Y. Kabashima. A CDMA multiuser detection algorithm on the basis of belief propagation. J. Phys. A: Math. and Gen., 36(43):11111, 2003.
[19] U. S. Kamilov, A. Bourquard, E. Bostan, and M.
Unser. Autocalibrated signal reconstruction from linear measurements using adaptive GAMP. Online preprint, 2013.
[20] F. Krzakala, M. Mézard, and L. Zdeborová. Phase diagram and approximate message passing for blind calibration and dictionary learning. ISIT 2013, arXiv:1301.5898, 2013.
[21] F. Krzakala, M. Mézard, F. Sausset, Y. F. Sun, and L. Zdeborová. Probabilistic reconstruction in compressed sensing: Algorithms, phase diagrams, and threshold achieving matrices. J. Stat. Mech., P08009, 2012.
[22] http://aspics.krzakala.org/.
[23] M. Grant and S. Boyd. CVX: Matlab software for disciplined convex programming, version 2.0 beta. http://cvxr.com/cvx, 2012.