
Submitted by Assigned_Reviewer_12
Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http://nips.cc/PaperInformation/ReviewerInstructions)
This paper proposes a denoising algorithm based on nonlocal image statistics and patch repetition, combining the advantages of NL-means and Exponentially Weighted Aggregation (EWA). The computation of the aggregated estimator is done using MCMC, and the results are comparable to state-of-the-art algorithms.
Pluses: 1) the method seems simple and straightforward to implement. 2) the experiments and results are convincing.
Minus: In terms of explaining *why* the method works, the text leaves something to be desired. E.g., in the second paragraph of section 7 ("The proposed implementation proceeds in two identical iterations.") there is no explanation or motivation, besides that "it works". To be clear, most other parts of the method are somewhat better justified than the example I just gave, but in many cases the reader is left to guess why this or that choice was made or why it improved the results.
In general, the text could be improved quite a bit if more effort were put into providing additional explanations.
Another example: ``..., several patches are overrepresented in the average and many patches are not selected.'' This is confusing: is the overrepresentation (or underrepresentation of other patches) a good thing or a bad thing? The mere fact that it is sparse (as mentioned in the next line) doesn't answer this question. May the authors please elaborate on this in their rebuttal?
Minor remarks: In reference [2], the year has been omitted (2005?). Line 083: ``enables'' > ``enable''. Line 182: ``sens'' > ``sense''. Line 230: ``equal weight'' > ``equal weights''. Lines 307-308: In this sentence, the word ``recently'' should probably appear only once, not twice. Q2: Please summarize your review in 1-2 sentences
The method is interesting and sufficiently novel (even though it is not groundbreaking), and the results are convincing. The authors could have improved the way in which they explain and motivate some of their choices and steps. Should the paper get accepted, I encourage them to address this issue. Having said that, overall my general impression is still positive.

Submitted by Assigned_Reviewer_13
The paper proposes a new patch-based image denoising algorithm. The paper combines ideas from the popular Non-local Means algorithm with theoretical results from the statistics literature on the SURE risk and exponentially weighted aggregation of multiple estimators.
The proposed algorithm denoises each patch separately by the following procedure: an initial denoising algorithm is first applied (here DCT with $L$ different settings) to produce denoised versions of the image. The set of all overlapping patches in these seed images (together with the raw noisy image) supplies a large number of "weak" estimators. The proposed algorithm aggregates these "weak" estimators to produce an estimate of the clean image values. The patches are aggregated with SURE-derived exponential weights (also complemented by a spatial proximity term).
NL-means can be considered a special case of the proposed method in which the original noisy patches constitute the set of "weak" estimators.
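As a rough illustration of this style of aggregation, the sketch below computes an exponentially weighted average over a set of candidate patch estimators. The squared-distance energy, the temperature parameter `beta`, and all names are illustrative assumptions, not the paper's exact SURE-based energy.

```python
import numpy as np

def ewa_aggregate(noisy_patch, candidates, sigma, beta=1.0):
    """Exponentially weighted aggregation of candidate patch estimators.

    Each candidate is weighted by exp(-E_l / (beta * sigma**2)), where the
    energy E_l is the squared distance to the noisy patch (a SURE-like
    data-fit term used here only for illustration).
    """
    energies = np.array([np.sum((noisy_patch - c) ** 2) for c in candidates])
    # Subtract the minimum energy before exponentiating for numerical stability.
    log_w = -(energies - energies.min()) / (beta * sigma ** 2)
    weights = np.exp(log_w)
    weights /= weights.sum()
    # Convex combination of the candidates: the aggregated patch estimate.
    aggregate = sum(w * c for w, c in zip(weights, candidates))
    return aggregate, weights
```

With the noisy patches themselves as candidates, this reduces to an NL-means-style weighted average, which matches the special case noted above.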
The paper proposes an MCMC-based procedure to accelerate the summation. It would be interesting to compare the proposed MCMC method with the acceleration heuristics used in the NL-means algorithm (confining the search to a small local window, fast ANN search, etc.). It would be nice to see this issue investigated if the paper gets accepted.
The authors do a good job evaluating the proposed method. It performs better than NL-means and similarly to BM3D (which is considered the state of the art in image denoising, and for which a fast implementation is available).
Small typo: Line 049: ``EPPL'' > ``EPLL''. Q2: Please summarize your review in 1-2 sentences
In my opinion, the paper is worth publishing. It proposes a relatively novel method, nicely puts it in the context of prior work, brings some new perspective to the area, and the proposed algorithm performs on par with the state of the art.

Submitted by Assigned_Reviewer_41
This paper proposes a general aggregation method for image denoising by combining multiple weakly denoised images from standard methods. The proposed method is based on EWA and employs Monte Carlo simulation for efficient approximation. The results are comparable to state-of-the-art algorithms.
It is nice that the authors provide a thorough review of the image denoising literature and discuss the differences between PEWA and existing denoising algorithms at the beginning. As far as I understand, the main contributions of this paper (Sec 4) compared to other "internal" methods are that PEWA generates candidate patches from weakly denoised images and that it merges candidate patches based on EWA. The aggregation method is flexible enough to combine any standard denoising algorithm and has a nice interpretation in terms of a Gibbs distribution. The experiments show that the aggregated image can be significantly better than each component, and good results can be obtained efficiently.
I have a concern about the flexibility of the proposed method. The authors criticize the "external" methods for not being flexible with respect to unknown noise levels in lines 55-57. But PEWA also relies on a known noise level. What if the noise variance is unknown? And what if the noise is not i.i.d. Gaussian?
I am also worried by the fact that when combining a large number of denoised images, the quality drops (lines 244-246). Would that suggest that SURE is not good enough to combine a large number of patches, or that the MCMC approach is not mixing well?
In the experiment section, PEWA is shown to perform only similarly to, and sometimes worse than, the state of the art. It would be useful to discuss any advantages of PEWA compared to BM3D and NL-Bayes.
Q2: Please summarize your review in 1-2 sentences
The proposed method is flexible enough to aggregate the results of any denoising algorithm and shows results comparable to the state of the art. However, it requires a known noise level, and its advantage over state-of-the-art algorithms is not clear.

Q1: Author rebuttal: Please respond to any concerns raised in the reviews. There are no constraints on how you want to argue your case, except for the fact that your text should be limited to a maximum of 6000 characters. Note however, that reviewers and area chairs are busy and may not read long vague rebuttals. It is in your own interest to be concise and to the point.

We thank the reviewers for their relevant comments. The reviewers' criticisms mainly concern the reasons why the PEWA method is able to produce results comparable to state-of-the-art algorithms. The paper gives equal weight to the theory and to the experimental results and illustrations. Due to the lack of space, it was hard to meet both objectives, and the paper outlines the main features of PEWA without addressing all the concerns raised by the reviewers as thoroughly as would be necessary.
Reviewer #12: In the rebuttal, we try to justify the choices we have made, as asked by the reviewer.
Most state-of-the-art algorithms (e.g. BM3D, NL-Bayes) use two iterations. The idea is to exploit the first iteration to help the algorithm improve the overall result at the second iteration. For instance, NL-Bayes updates the covariance matrices of groups of patches and computes an improved estimator at the second iteration. In our approach, the denoised image obtained at the first iteration is potentially oversmoothed at some locations. At the second iteration, PEWA combines the “first” patch estimators and the “weak” patch estimators to better restore the oversmoothed structures. For low signal-to-noise ratios, PEWA considers the “first” patch estimators as “good” new candidates, and the unwanted residual noise of the first iteration is removed by the second averaging process.
PEWA exploits several denoised versions of the input image to adapt to a variety of local contexts. It is generally hard to fix a unique threshold (DCT, Wavelet Transform, ...) that denoises an image satisfactorily, since important structural details are irremediably lost. Considering a set of weakly denoised images allows us to overcome this difficulty and to preserve important structures slightly drowned in the noise.
In the MCMC sampling, some patches are selected more frequently than others at a given location. The number of occurrences of a particular candidate patch can be evaluated. In constant image areas, there is probably no preference for any one patch over another, but we expect a small number of candidate patches to be selected along image contours and discontinuities. The adjective “sparse” was perhaps a misleading choice; we meant that the method is able to use a very small set of candidate patches for restoration at some locations, depending on the image context.
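A toy Metropolis-style sketch of how such occurrence counts could be collected is given below. The uniform proposal over the candidate set, the squared-distance energy, and the parameters are illustrative assumptions, not the paper's exact sampler; the running mean of the visited candidates approximates the exponentially weighted average, and the visit counts show when only a few candidates dominate.

```python
import numpy as np

def mcmc_ewa(noisy_patch, candidates, sigma, beta=1.0, n_iter=1000, seed=0):
    """Metropolis-style Monte Carlo approximation of an EWA patch estimate.

    Samples candidate indices from a Gibbs distribution proportional to
    exp(-E_j / (beta * sigma**2)) and returns the running mean of the
    visited candidates plus per-candidate visit counts.
    """
    rng = np.random.default_rng(seed)
    energy = lambda c: np.sum((noisy_patch - c) ** 2)

    idx = rng.integers(len(candidates))       # random initial state
    e_cur = energy(candidates[idx])
    counts = np.zeros(len(candidates), dtype=int)
    acc = np.zeros_like(candidates[0], dtype=float)

    for _ in range(n_iter):
        j = rng.integers(len(candidates))     # uniform proposal
        e_new = energy(candidates[j])
        # Metropolis acceptance for the discrete Gibbs distribution.
        if np.log(rng.random()) < -(e_new - e_cur) / (beta * sigma ** 2):
            idx, e_cur = j, e_new
        counts[idx] += 1
        acc += candidates[idx]
    return acc / n_iter, counts
```

In flat areas all candidates have similar energies and the counts spread out; near a contour, one or two low-energy candidates absorb most visits, which is the "sparse" selection behaviour described above.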
In this paper, we will try to better justify the other choices we have made while preserving the description of experimental results.
Reviewer #13: The MCMC procedure may be considered an efficient computational method to calculate the NL-means estimator, as noticed by the reviewer. Nevertheless, it has been shown that considering all the image patches to restore a given pixel does not produce the best results (except for repeated textures), and it is computationally demanding. It is preferable to exploit patches taken in a semi-local neighborhood (e.g. 21 x 21 search windows) to obtain higher PSNR values. In our approach, the neighborhood is not fixed; the range is controlled a priori by the parameter \tau. It is then possible to select relevant patches located far from the central patch (not taken in a fixed neighborhood). As suggested, we can add a fair comparison with NL-means by considering only noisy patches to constitute the set of “weak” estimators. This can be useful to demonstrate that combining several denoised images definitely plays a key role in PEWA.
Reviewer #41: For real-world noisy images, we use the method described in Kervrann and Boulanger (IEEE T. Image Processing 2006) to robustly estimate the noise variance. State-of-the-art methods are generally based on a Gaussian noise model and provide very satisfying results in real applications; this Gaussian approximation is considered sufficient in most cases. For signal-dependent noise, we suggest computing spatially adaptive noise variances combined with PEWA. This was successfully experimented with but not shown in this paper. Finally, the SURE estimator can be appropriately designed to handle specific noise models. For instance, PURE was recommended in Luisier et al. (Signal Processing 2010) to remove Poisson noise.
In lines 244-246, we mentioned that considering a large number L of denoised images drops quality. In our experiments, the results look very similar for an unchanged number of MCMC iterations (T = 1000). Increasing L would suggest considering more MCMC samples, since the space of candidate patches is larger.
The reviewer also noticed that SURE might not be good enough to combine a larger number of denoised images. Actually, we assume that all estimators are independent in PEWA, since it is unfeasible to compute SURE as defined by Stein. However, investigating other robust influence functions (e.g. Huber, Leclerc, ...) in place of \phi(z) = z (see (4)) in the definition of the Gibbs energy could be envisaged.
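As a sketch of what swapping in a robust penalty might look like: the Huber form and its `delta` threshold below are standard robust statistics, but their use as the data-fit term in a PEWA-style energy here is purely illustrative, not the paper's formulation.

```python
import numpy as np

def huber(z, delta=1.0):
    """Huber penalty: quadratic near zero, linear in the tails.

    Large residuals are penalized linearly rather than quadratically,
    so outlier pixels influence the energy less.
    """
    a = np.abs(z)
    return np.where(a <= delta, 0.5 * z ** 2, delta * (a - 0.5 * delta))

def robust_energy(noisy_patch, candidate, delta=1.0):
    """Gibbs-style energy of a candidate patch under the robust penalty
    (illustrative replacement for a squared-distance data-fit term)."""
    return np.sum(huber(noisy_patch - candidate, delta))
```

Plugging such an energy into the exponential weights would down-weight candidates with a few grossly wrong pixels less harshly than the quadratic penalty does.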
Unlike BM3D and NL-means, PEWA combines several “weak” estimators, which is an important feature. This is not considered in BM3D, even though the Wavelet Transform or PCA have been investigated there instead of the DCT. Actually, the components of BM3D are well known in signal processing (DCT, Wiener filtering), but the combination proposed by Dabov et al. is still under study, since it is hard to do a better job of denoising. NL-Bayes computes the covariance matrices and the mean of groups of similar patches. It shares common features with BM3D in some sense, and both algorithms use two iterations. Our method shares more common features with NL-means, which is a more intuitive algorithm. PEWA does not explicitly group similar patches, contrary to NL-Bayes and BM3D.
We provide a more comprehensive approach than NL-Bayes and BM3D without sacrificing efficiency. The PEWA framework can mix, on a well-founded statistical basis, several estimators coming from different methods (Wiener, DCT, ...), and robust statistics can be investigated.
 