| Paper ID | 5712 |
|---|---|
| Title | Variational Graph Recurrent Neural Networks |

This paper studies a graph RNN model for dynamic graphs in which the cardinalities of the node and edge sets can be time-varying; in particular, the proposed VGRNN is designed for highly variable graph sequences. The hidden state h_t, tracked by an RNN, governs the prior of the latent variables, and the sampled latent variable controls the generation of the time-varying adjacency matrices. This hierarchical modeling allows the proposed VGRNN to fit highly time-variable graph sequences.

I have been working on modeling dynamic graphs for years. The modeling here is rigorous but somewhat complicated; I had to read the entire paper multiple times to understand it, and the core formulation is possibly very difficult for non-expert readers to follow.

I have two major concerns about this paper. First, as written in L96, the proposed model is flexible enough to allow the cardinalities of the node and edge sets to change across time steps. In such a dynamic graph, deletions and additions of nodes are always problematic for both formulation and implementation. How does the VGRNN model deal with these issues? Unfortunately, I cannot find explanations of them in the current manuscript. For example, assume that at t=0 there are 5 nodes in the graph, the 3rd node disappears at t=1, and a 6th (new) node appears at t=2. In such a situation, an appropriate model needs to guarantee that the deleted 3rd node does not affect the computation after t=1, and that the added 6th node does not affect the inference before t=2. At the same time, the size of the adjacency matrix may change over time, so it is not trivial to maintain node identities. It would be great if the authors could detail how these issues are resolved.

My other concern is the insufficient discussion of existing work. In the main manuscript, the discussion of existing work is limited to very close or direct competitors (there is some discussion in the supplementary material, though).
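To make the concern concrete, here is a minimal sketch (hypothetical, not taken from the paper) of one common way such changing node sets are implemented: keep a fixed maximal node set and zero out absent nodes with a per-step mask, so that deleted nodes provably cannot influence later computation. The indices follow the example above (node 3 deleted at t=1, node 6 added at t=2); `masked_gcn_step` and the mask scheme are illustrative names, not the authors' code.

```python
import numpy as np

# Per-step presence masks over a fixed maximal node set of 6 nodes
# (illustrative only; the manuscript does not describe this mechanism).
masks = {
    0: np.array([1, 1, 1, 1, 1, 0], dtype=float),  # t=0: nodes 1-5 present
    1: np.array([1, 1, 0, 1, 1, 0], dtype=float),  # t=1: node 3 deleted
    2: np.array([1, 1, 0, 1, 1, 1], dtype=float),  # t=2: node 6 added
}

def masked_gcn_step(A_t, X_t, mask_t, W):
    """One toy message-passing step in which absent nodes neither
    send nor receive messages and keep a zero state."""
    m = mask_t[:, None]                 # (N, 1) column mask
    A_masked = A_t * m * m.T            # cut every edge touching an absent node
    H = np.tanh(A_masked @ X_t @ W)     # aggregate neighbor features
    return H * m                        # absent nodes stay at zero
```

Under this masking, the deleted 3rd node's features cannot reach any present node after t=1, and the 6th node contributes nothing before t=2; the remaining question, as noted above, is how node identities are tracked if the matrices are instead resized.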
I do understand the tight page limit of NeurIPS submissions, but the current main manuscript is clearly not sufficient to place the proposed work within the dynamic graph analysis literature. The references mainly focus on the node embedding approach, but the scope of dynamic graph analysis is broader. For example, node embedding is not the sole approach to dynamic link detection/prediction problems, right? Why does this paper focus on the node embedding approach? What are the pros and cons of node embedding against other approaches (such as classical, non-DNN approaches)?

+ Governing the prior parameters by latent RNN factors sounds reasonable for modeling highly variable dynamic graphs
+ Experimental performance seems good
- Not clear how VGRNN treats the dynamic addition/deletion of nodes
- Insufficient discussion of related work

### after author feedback ###

I read the feedback letter and the other reviewers' questions. Line 13 of the letter says "the size of A and X can [change] in time but their latent space maintains across time." This partially addresses my concern conceptually, but raises another practical question. If the sizes of A and X are t-dependent, some components of the model formulation are difficult to implement because they must accept size-variable matrices. In particular, Eq. (1), Eq. (4), and \mu_{enc}^{(t)} and \sigma_{enc}^{(t)} in Eq. (6) should be described more concretely. Thus I have decided to keep my score.

Clarity: There are some typos, e.g., "graph convolutional neural network (GCRN)" in line 34. Please proofread the paper.

Originality: Although RNNs with stochastic latent variables and GRNNs have been studied extensively, there is little work combining these two ideas, to the best of my knowledge.

Quality: This paper claims that the high-level latent random variables of VGRNN are vital (i.e., for interpretability and uncertainty) in modeling complex dynamic graphs. The AUC and AP experimental results partially support this claim. However, there are no experiments discussing the interpretability of the latent variables.

Significance: VGRNN does contribute to understanding and solving dynamic graph modeling compared to previous methods.

----------------------------------

After Rebuttal: Thanks for your response. The visualization of the latent space partially addresses my question.

Originality: The effort to predict links in dynamic graphs through a combination of a variational autoencoder, a graph neural network, and a recurrent neural network is the interesting contribution of the paper. I have some doubts about the formulation that subtract from my evaluation of originality.

- As I understand the paper, X represents node attributes, A represents the dynamic network, Z represents the code of all X and A up to the current time, and h represents a latent state of all X, A, and Z up to the current time. Then what additional information does Z_t provide in Eq. 4, since in Eq. 2 Z_t contains the same amount of information as h_{t-1}?
- In Fig. 1(b), the code Z_t is only used to generate the dynamic network A_t, not the node attributes X_t. This is not symmetric if we consider that Z_t encodes both X_t and A_t in Fig. 1.
- In the formulation, Z_t has the same dimensionality as X_t, meaning that Z_t is a code for each node. This does not necessarily need to be so.
- In the experiments, I would like to see a more detailed analysis of what GCRN gives us, in comparison with other neural network algorithms as well as non-neural-network algorithms.

Significance: This paper has significance as an application paper, but for an application paper I would like to see a deeper analysis of what GCRN gives us in comparison with other algorithms. In particular, some visualization and comparison at the link level might be very helpful. The variational solution of the evidence lower bound is fairly standard.

### after author feedback ###

Thanks for answering the key questions in the thorough rebuttal! While I am still not totally convinced about the novelty, I believe the authors will seriously revise the submission and address the reviewers' concerns if it is accepted. As a result, I would move my overall score to 6.