__ Summary and Contributions__: This paper analyzes the performance of generalized vector-approximate message passing (G-VAMP) on the phase retrieval problem. The main contributions are 1) a rigorous characterization of the algorithm for certain non-trivial classes of measurement matrices, and 2) a characterization of the weak- and strong-recovery thresholds for different designs of the measurement matrix.

__ Strengths__: The theorems look sound, and I believe the significance of the paper is around or above the borderline. The phase retrieval problem, viewed as a non-convex optimization problem, is definitely relevant to the NeurIPS community.

__ Weaknesses__: One major weakness of the paper is that the rigorous analysis of the performance of G-VAMP is limited to the cases where either the measurement matrix is Gaussian or the unknown signal is Gaussian. Moreover, the analysis of the weak and strong thresholds is also restricted to special cases, and there is no unified framework encompassing all of them. One minor weakness (mainly due to the algorithm itself) is that G-VAMP, as I understand it, requires certain knowledge about the distribution of the true signal. I suggest the authors point this out and present the algorithm for the convenience of readers who are not familiar with the AMP literature.
After response:
My apologies for misunderstanding the importance of the results. I raise my score to 7.

__ Correctness__: I am not very familiar with the replica method, and the appendix (27 pages) is too long for me to check the proofs carefully in a very short time. I apologize that I cannot be certain whether the proofs are correct.

__ Clarity__: The main paper is clear and easy to read.

__ Relation to Prior Work__: The discussion of the related work looks good to me. One suggestion is to cite "Approximate message passing for amplitude based optimization" or "Optimization-Based AMP for Phase Retrieval: The Impact of Initialization and Regularization", both of which use AMP algorithms to solve the phase retrieval problem and are thus closely related to the topic of this paper.

__ Reproducibility__: Yes

__ Additional Feedback__: I think this paper (if the proofs are all correct) is somewhere between 6 and 7. It would be a 7 if all the results were rigorous or if there were a unified framework for the analysis of the weak and strong thresholds. It would be an 8 or 9 if both of these were resolved.

__ Summary and Contributions__: This paper extends, generalizes, and unifies known results for phase retrieval in high dimensions. A general conjecture is presented that provides a single-letter characterization of the free energy. This conjecture is proved rigorously in certain cases. G-VAMP, conjectured to be optimal in many settings, is employed to estimate the signal. From this, the authors are able to determine algorithmic thresholds for both weak and full recovery, and information-theoretic thresholds for full recovery.

__ Strengths__:
Soundness: the theoretical and empirical results appear sound to me
Significance and novelty: phase retrieval is a significant problem. The main novelty is combining a variety of existing results in a common framework, and then introducing new results based on that framework. These new results come in two forms: information-theoretic thresholds for full recovery, determined rigorously by proving the conjecture under different conditions, and algorithmic thresholds for weak and strong recovery, based on the analysis of the G-VAMP algorithm.
Relevance to NeurIPS: definitely relevant

__ Weaknesses__:
Soundness: no weakness that I can determine
Significance and novelty: no weakness that I can determine
Relevance to NeurIPS: no weakness

__ Correctness__:
The proofs are in a 27-page appendix. Having said that, the theoretical results are believable because they follow a common recipe: the replica computation, and the interpolation method for proving the correctness of the replica computation.

__ Clarity__:
The paper is well written. Having said that, it is a challenge to read, given the abundance of notation and the variety of results mentioned.

__ Relation to Prior Work__:
Yes, prior work is discussed and compared to this work. As mentioned above this paper attempts to consolidate known results in a general framework and then extend those results.

__ Reproducibility__: Yes

__ Additional Feedback__:

__ Summary and Contributions__: The authors discuss the fundamental phase retrieval problem, where Y_i = |<\Phi_i, x^*>| for i = 1, ..., m, with \Phi_i and x^* being n-dimensional. They study the statistical and algorithmic problem of reconstructing x^* under various random generating assumptions on the model. The setting is m/n = \alpha (constant) as m, n diverge to infinity.
The authors first apply the non-rigorous replica method, which gives rise to a conjecture on the optimal reconstruction of x^*. This conjecture applies to a large family of sensing matrices \Phi (real and complex) - much larger than in existing results - and to separable priors on x^*. The authors prove the conjecture for two relatively general generative models of \Phi and x^*, yet these are far from the whole family to which the conjecture applies.
The authors use the conjecture to derive the information-theoretic full-recovery limit of the problem (the minimum \alpha for which full recovery of x^* is possible) as well as the algorithmic weak-recovery threshold (the minimum \alpha for which G-AMP achieves positive correlation with x^*). The latter result is possible because, in this regime, G-AMP performs gradient ascent on the replica potential started from zero. Finally, the authors provide simulations in which, among other observations, an interesting all-or-nothing phase transition seems to appear.
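For readers less familiar with the setup, the measurement model summarized above can be sketched in a few lines. This is only a minimal illustration, assuming an i.i.d. complex Gaussian sensing matrix and a Gaussian signal; the paper itself covers far more general matrix ensembles and separable priors.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 500            # signal dimension
alpha = 2.0        # measurement ratio m/n
m = int(alpha * n)

# Ground-truth signal x* and sensing matrix Phi (i.i.d. complex Gaussian case)
x_star = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
Phi = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * n)

# Phaseless measurements: Y_i = |<Phi_i, x*>|
Y = np.abs(Phi @ x_star)

# The phase is lost: any global phase rotation of x* yields the same Y,
# which is why recovery is only possible up to a global phase.
assert np.allclose(Y, np.abs(Phi @ (np.exp(1j * 0.7) * x_star)))
```

The final assertion illustrates the global-phase ambiguity that makes the problem non-convex and motivates the weak/full recovery distinction discussed in the paper.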

__ Strengths__: I think the paper is very interesting for the following reasons.
1. The replica method is a heuristic that has traditionally been applied under a Gaussian assumption on the sensing matrix. While the authors are not the first to apply it in a non-strictly-Gaussian setting for phase retrieval, they apply it in the complex case under minimal assumptions on the spectrum of the sensing matrix.
2. The authors partially prove the conjecture, which appears to be a non-trivial task even in the restricted setting they consider. Naturally, they build on recent proofs of predictions from the replica formula.
3. The authors show that (based on the Conjecture) there is potentially a statistical-computational gap in this generic setting for the sensing matrix. This is novel and interesting as compared to the standard case people have studied where the matrix is considered Gaussian.

__ Weaknesses__: 1. I think the paper does not state very clearly which results are rigorous and which are not. For example, it would be really nice if the authors explicitly mentioned that their derivations in Sections 3 and 4 are based on the Conjecture from the replica method and not on any theorem (that is correct, right?). Furthermore, clearly indicating which results are rigorous and which are not will hopefully help future researchers in the field when reading the present work. I am also confused on this point regarding Table 1 and the cases where \Phi = W_1 W_2. Are W_1, W_2 i.i.d. random Gaussian here? If yes, how does this connect with Theorem 2.2, where W_2 needs to be fixed?
2. In the figures, saying "a real column-orthogonal matrix" does not read nicely. Please provide some details in the caption.
3. I am puzzled about what the prior distribution is in the simulations. Is it a complex Gaussian when the matrix is complex, and real otherwise?
4. On that note, in the abstract \alpha distinguishes between the two fields, real and complex, whereas this quantity becomes \beta in the main text.
5. I would suggest avoiding the use of the wording "the algorithmic threshold", as we have no guarantee that G-AMP is the optimal polynomial-time algorithm, right?
6. While the authors extensively discuss \alpha_{WR,Alg} and \alpha_{FR,IT}, I am confused as to why they did not discuss \alpha_{WR,IT}. I think the paper would be more complete with such a study and discussion, and given the Conjecture, such a characterization is possibly not hard to achieve.
7. When reading the paper, I could not tell whether the authors *prove* that there is a gap between the algorithmic recovery threshold and the information-theoretic one. If so, could they provide an explicit discussion of the assumptions under which they prove, or conjecture, this to be true?

__ Correctness__: As far as I have checked, the paper seems to be correct. As mentioned above, I think it would strongly benefit from a clear indication of which parts are rigorous and which are not.

__ Clarity__: The paper is clear to me, modulo the above comments.

__ Relation to Prior Work__: I think the paper is clearly discussing the relation to prior work.

__ Reproducibility__: Yes

__ Additional Feedback__: A comment I wanted to make concerns the "all-or-nothing" phase transition mentioned in line 89. This is potentially a very interesting finding: to the best of my knowledge, despite a recent line of work initiated by [Gamarnik et al., COLT 2017; Reeves et al., COLT 2019] on all-or-nothing transitions for sparse linear regression, the phenomenon has never been established in a "dense" setting like the present one. I think it would be very interesting for the authors to state more clearly under which assumptions (on the prior of the signal, the sensing matrix, etc.) they observe this transition, as this could actually be another contribution of the present work. Of course, an extensive study of this could be the topic of a future project.
Finally a few corrections:
Line 32, "fixed" instead of "finite".
Line 44, "polynomial-time" instead of "polynomial"

__ Summary and Contributions__: The authors analyze the fundamental limits of phase retrieval under large random matrices, both real- and complex-valued. They also analyze the G-VAMP algorithm in the same context and show a few empirical performance results.

__ Strengths__: The rigorous performance analysis is impressive. It generalizes several previous results and thus extends the frontier of theoretical understanding about phase retrieval with large random matrices.

__ Weaknesses__: The theory applies to large random matrices and iid signals, not the Fourier matrices and natural images that one encounters in practice. It is encouraging that the empirical experiments show decent performance for randomly sampled Fourier matrices and iid signals, but I don't expect that these results will extend to natural images.
For these reasons, and because phase retrieval is not of mainstream interest at NeurIPS, I worry that not many NeurIPS attendees will be interested in this work. It may be better appreciated at a theoretical conference.

__ Correctness__: Yes, the claims appear to be correct, although I did not check the proofs in detail.

__ Clarity__: Yes, the paper is well written.

__ Relation to Prior Work__: Yes, the authors do a good job of discussing how their work differs from, and improves upon, existing papers.

__ Reproducibility__: Yes

__ Additional Feedback__: ** I have read the other reviews and the authors' response, and my opinion remains the same. **