This paper considers temporal positive-unlabeled (PU) learning to predict future connectivity on a dynamic attributed graph (i.e., the hypothesis generation problem). The major contribution is to transform the problem of interest into PU learning. The proposed method is also demonstrated on COVID-19 data. The clarity and novelty are clearly above the bar of NeurIPS. While the reviewers had some concerns about the experiments, the authors did a particularly good job in their rebuttal. Thus, all of us have agreed to accept this paper for publication!

I have two additional comments. First, the learning objective used in the paper is not originally from . Indeed, it is from [M. C. du Plessis, G. Niu, and M. Sugiyama. Convex formulation for learning from positive and unlabeled data. ICML 2015]; see the unnumbered equation above Eq. (3) of that paper. That paper is mostly cited together with  and ; in particular, it is what you are actually using in Section 3.1 (Model Design), but it seems that you missed this reference. Second, as pointed out by R2, the irreducibility assumption is an issue for PU class-prior estimation. There is a paper on arXiv entitled "Towards Mixture Proportion Estimation without Irreducibility", which is, as far as I know, the latest result and can be used to estimate the PU class prior when the irreducibility assumption fails to hold in practice.

PS: "unbiased PU" should be shortened to uPU and "non-negative PU" to nnPU (at least in the nnPU paper, which is the first paper to use these two names). UPU and NNPU look like two new problem settings named unlabeled-positive-unlabeled and negative-negative-positive-unlabeled classification...

PS2: regarding the name "Marthinus Christoffel du Plessis" (the first author of [9, 12, 13]), Christoffel is his middle name, not his last name. The short version should be "M. C. du Plessis" as shown above, or "du Plessis, M. C.", where "du" is lowercase even at the beginning of a sentence (as a rule of Afrikaans).
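For concreteness, the uPU estimator (du Plessis et al.) and its non-negative correction nnPU (Kiryo et al.) discussed above can be sketched as follows. This is a minimal pure-Python illustration, not the authors' implementation; the sigmoid surrogate loss is an illustrative choice (the convex formulation in the ICML 2015 paper actually uses the double hinge loss), and `pi` denotes the positive class prior:

```python
import math

def sigmoid_loss(z):
    # Surrogate loss l(z) = 1 / (1 + exp(z)); satisfies l(z) + l(-z) = 1.
    return 1.0 / (1.0 + math.exp(z))

def mean(xs):
    return sum(xs) / len(xs)

def upu_risk(g_p, g_u, pi):
    """Unbiased PU risk: pi * E_p[l(g)] + E_u[l(-g)] - pi * E_p[l(-g)].
    g_p, g_u: classifier outputs on positive / unlabeled samples."""
    return (pi * mean([sigmoid_loss(z) for z in g_p])
            + mean([sigmoid_loss(-z) for z in g_u])
            - pi * mean([sigmoid_loss(-z) for z in g_p]))

def nnpu_risk(g_p, g_u, pi):
    """Non-negative PU risk: same estimator, but the implicit
    negative-class risk term is clipped at zero to prevent it
    from going negative (and causing overfitting)."""
    neg = (mean([sigmoid_loss(-z) for z in g_u])
           - pi * mean([sigmoid_loss(-z) for z in g_p]))
    return pi * mean([sigmoid_loss(z) for z in g_p]) + max(neg, 0.0)
```

The two estimators coincide whenever the bracketed negative-risk term is non-negative; they differ exactly when that term dips below zero, which is where uPU can overfit with flexible models.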