This paper was highly regarded by all reviewers. I looked at the paper myself because of a possible conflict between the claimed lower bound results and the formal limitations on the measurement of mutual information proved in [McAllester and Stratos, AISTATS, 2020], which the authors cite. The paper emphasizes that ML-CPC improves the upper bound on the MI estimate from log(n) in traditional CPC to (roughly) log(n^2) (for modest m and alpha at the lower end of its range). This is in rough correspondence with the completely general upper bounds on MI estimation of McAllester and Stratos. However, a failure to prove a tighter upper bound on ML-CPC does not imply that no tighter upper bound exists. A more convincing, and perhaps more enlightening, result would be to demonstrate a case where ML-CPC actually achieves this upper bound --- that is, that a lower bound on MI of size log(n^2) is attainable. The empirical results are strong in any case.
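To make the log(n) ceiling on the traditional CPC estimate concrete, here is a small numerical sketch (my own illustration, not taken from the paper): the CPC/InfoNCE estimate averages log of n times the softmax weight assigned to the positive pair, and since that weight is at most 1, the estimate can never exceed log(n), no matter how extreme the critic's scores are.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # batch size; the CPC/InfoNCE estimate is capped at log(n)

# Hypothetical critic scores: rows index samples, columns index candidates,
# with the diagonal holding the positive-pair score. Scores are deliberately
# extreme to show the cap holds regardless of the critic.
scores = 10.0 * rng.normal(size=(n, n))

# Numerically stable softmax over each row of candidates.
logits = scores - scores.max(axis=1, keepdims=True)
softmax = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# CPC/InfoNCE estimate: mean over samples of log(n * weight of the positive).
estimate = np.mean(np.log(n * softmax[np.arange(n), np.arange(n)]))

# The softmax weight of the positive is <= 1, so the estimate is <= log(n).
assert estimate <= np.log(n) + 1e-9
print(f"estimate = {estimate:.3f}, log(n) bound = {np.log(n):.3f}")
```

The analogous question for ML-CPC would be whether a critic and data distribution exist for which the corresponding estimate actually approaches log(n^2), which is the demonstration suggested above.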