NeurIPS 2019
Sun Dec 8th through Sat Dec 14th, 2019, at the Vancouver Convention Center
Paper ID: 8412
Title: Algorithmic Guarantees for Inverse Imaging with Untrained Network Priors

Reviewer 1

This paper presents compressed-sensing-style recovery guarantees for solving inverse imaging problems with untrained network priors, as initially investigated (empirically) in [9] and [10]. The work extends RIP-like conditions for exact recovery under a "deep decoder" prior for the image [1] (i.e., it assumes the image lies in the range of a convolutional neural network of a certain structure). In particular, the authors prove that for a d-dimensional image, n x d Gaussian random matrices satisfy an RIP condition over the range of a deep decoder network provided n is on the order of the dimension of the network's initial layer. Using this RIP property, they prove that a projected gradient descent algorithm converges linearly to a solution of the optimization problem with the deep decoder prior. The paper also extends this analysis to an algorithm for compressive phase retrieval.

This paper makes several excellent contributions. To my knowledge, it is the first to give compressed-sensing-style recovery guarantees for solving inverse problems with deep network priors like those in [9] and [10], and the first to apply these methods to the compressive phase retrieval problem. The paper is very clearly written. In particular, the proof sketches give useful details for understanding how the main results were obtained and indicate clearly which earlier results they build on (namely, ref. [6]). Overall my appraisal is very positive, and I think this paper is a clear accept.

However, I see some minor limitations of this work. One limitation of the present analysis (acknowledged by the authors) is that the algorithm studied is somewhat idealized: it assumes the projection onto the range of the deep net can be computed exactly. This subproblem is non-convex, and standard solvers for it, such as gradient descent, could get stuck in local minima. It would be interesting to extend the present analysis to the case where this subproblem is solved only inexactly, perhaps even with a single gradient step. Also, the sample complexity n = O(k_1 (k_2 + k_3 + ... + k_L) log d) seems suboptimal: is the dependence on the sizes of the intermediate layers k_2, k_3, ..., k_L truly necessary, or is it an artifact of the proof technique? Some comments on this would help the reader understand whether the results are order-wise optimal.
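For concreteness, a minimal sketch of the projected gradient descent loop discussed above, with the projection step approximated by an inner gradient descent over the network weights (PyTorch-style; the decoder G, seed z, and weight list w are hypothetical stand-ins, not the authors' code):

    import torch

    def net_pgd(A, y, G, z, w, eta=0.5, outer_iters=50, inner_iters=100, lr=1e-3):
        """Projected gradient descent with an untrained network prior (sketch).
        A: (n, d) measurement matrix; y: (n,) measurements;
        G: decoder mapping a fixed seed z and weight list w to a d-dim image;
        w: list of weight tensors with requires_grad=True."""
        x = G(z, w).detach()                      # current estimate in range(G)
        opt = torch.optim.Adam(w, lr=lr)
        for _ in range(outer_iters):
            # gradient step on the data-fidelity term f(x) = ||y - A x||^2 / 2
            v = x - eta * A.T @ (A @ x - y)
            # inexact projection of v onto range(G): fit w so that G(z; w) ~ v
            for _ in range(inner_iters):
                opt.zero_grad()
                loss = torch.sum((G(z, w) - v) ** 2)
                loss.backward()
                opt.step()
            x = G(z, w).detach()
        return x

An analysis tolerating a fixed, small number of inner iterations here would address the idealization noted above.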

Reviewer 2

- The authors could further explain why the assumption in Def. 1 should hold for natural images, e.g., for the architecture considered in eq. (3). Also, if the model approximates natural images (e.g., similarly to wavelets), what is the approximation bound in terms of the DIP size {k_l, d_l}? And why could the DIP approach beat compressed sensing algorithms that use learned priors?
- Theorem 1 and its proof are not very new: results on iterative hard thresholding for sparse coding, and its generalized variants for nonconvex signal constraints, cover claims similar to those of Theorem 1.
- Theorem 1 is said to guarantee image recovery by Algorithm 1, but line 4 of Algorithm 1, the projection step, is one we do not know how to solve exactly (the same applies to Theorem 2). Global convergence guarantees are therefore not really meaningful/practical to discuss.
- What is the advantage of NetPGD over NetGD? The experiments show similar runtimes, although NetPGD may require solving a subproblem at each iteration; see the sketch below. Further experimental details could clarify this point.
- In the statements of Theorems 1 & 2, there should be a condition on the step size somewhere.
- Line 462: typo, two W1s.
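For comparison with the NetPGD sketch above, a minimal sketch of the NetGD alternative under the same hypothetical conventions (G, z, w are stand-ins, not the authors' code): it descends the measurement loss directly through the network weights, so no inner projection subproblem arises, which is why the similar runtimes deserve explanation:

    import torch

    def net_gd(A, y, G, z, w, iters=5000, lr=1e-3):
        """Plain gradient descent on the weights through the measurement loss:
        min_w ||y - A G(z; w)||^2 (sketch; no projection subproblem)."""
        opt = torch.optim.Adam(w, lr=lr)
        for _ in range(iters):
            opt.zero_grad()
            loss = torch.sum((y - A @ G(z, w)) ** 2)
            loss.backward()
            opt.step()
        return G(z, w).detach()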

Reviewer 3

This submission aims to establish recovery guarantees for deep image prior methods used in solving (linear) inverse problems. Motivated by compressive imaging and phase retrieval, a projected gradient descent (PGD) approach is adopted that entails projecting onto the range of deep networks. For Gaussian measurement matrices, convergence of the PGD method to the ground truth is established for the compressive imaging as well as the phase retrieval observation models.

Strong points:
- Recovery guarantees for deep image priors are a very important problem.
- The technical analysis is solid.

Weak points:
- The novelty of the proof technique seems incremental; it appears to be a combination of the results in [Bora et al.'17] for the RIP and [Oymak et al.'17] (not cited; reference below) for linear convergence of linear inverse problems with nonconvex regularization. The authors need to clarify the challenges that the deep image prior model introduces.
  [Oymak et al.'17] Oymak S, Recht B, Soltanolkotabi M. Sharp time–data tradeoffs for linear inverse problems. IEEE Transactions on Information Theory. 2017 Nov 14;64(6):4129-58.
- The experiments are not convincing; they do not support the claims reported in this submission. The improved reconstruction quality of the deep image prior relative to existing untrained schemes has already been reported; it would be of interest to empirically assess the linear convergence behavior.
- Whether the RIP conditions are satisfied is also very important to verify; given a dataset of images sampled from the prior and computational platforms with deep learning APIs, it would be very useful to evaluate approximations of the RIP for the considered examples.
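As a concrete version of the last point, a minimal sketch (numpy; sample_from_prior is a hypothetical stand-in for drawing d-dimensional images from the decoder's range) of a Monte Carlo estimate of the RIP-style ratio ||A(x1 - x2)||^2 / ||x1 - x2||^2 for a Gaussian A:

    import numpy as np

    def estimate_rip_ratios(sample_from_prior, d, n, trials=1000, seed=0):
        """Empirically bracket ||A(x1 - x2)||^2 / ||x1 - x2||^2 over pairs of
        images in the range of the prior (sketch of an RIP approximation)."""
        rng = np.random.default_rng(seed)
        # Gaussian sensing matrix scaled so that E ||A x||^2 = ||x||^2
        A = rng.normal(scale=1.0 / np.sqrt(n), size=(n, d))
        ratios = []
        for _ in range(trials):
            x1, x2 = sample_from_prior(), sample_from_prior()
            diff = x1 - x2
            ratios.append(np.sum((A @ diff) ** 2) / np.sum(diff ** 2))
        ratios = np.asarray(ratios)
        return ratios.min(), ratios.max()  # empirical (1 - delta, 1 + delta) band

Reporting such empirical bands alongside the theory, for the same architectures used in the experiments, would substantiate the RIP assumption.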