__ Summary and Contributions__: This paper presents another graph embedding technique based on hyperbolic spaces and convolutions. The topic is interesting, but has been very widely investigated in recent years. In contrast to the already rich literature on graph embeddings (as well as hyperbolic spaces), this paper jointly utilizes hyperbolic and Euclidean embeddings.
Basically, the proposed recipe seems to be the following:
1) Apply message passing in the hyperbolic and Euclidean spaces (using a GAT -- an attention mechanism)
2) Transform the Euclidean features to hyperbolic space
3) Merge the features in an adaptive way.
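For concreteness, the recipe above can be sketched as follows. This is a minimal NumPy sketch under my own reading of the paper: the function names, the sigmoid gate used for step 3, and the fixed curvature c=1 are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def exp_map_zero(v, c=1.0):
    """Poincaré-ball exponential map at the origin: Euclidean tangent vector -> hyperbolic point."""
    norm = np.linalg.norm(v, axis=-1, keepdims=True).clip(min=1e-9)
    return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

def log_map_zero(y, c=1.0):
    """Inverse map: hyperbolic point -> Euclidean tangent vector at the origin."""
    norm = np.linalg.norm(y, axis=-1, keepdims=True).clip(min=1e-9)
    return np.arctanh(np.clip(np.sqrt(c) * norm, 0.0, 1.0 - 1e-9)) * y / (np.sqrt(c) * norm)

def fuse(h_euc, h_hyp, w):
    """Step 3, adaptively: sigmoid-gated convex combination, done in the tangent space."""
    g = 1.0 / (1.0 + np.exp(-w))            # learned gate in (0, 1)
    return g * h_euc + (1.0 - g) * log_map_zero(h_hyp)
```

The tanh/arctanh pair is the standard Poincaré-ball exponential and logarithmic map at the origin; fusing in the tangent space is only one of several plausible readings of step 3.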

__ Strengths__: 1) It is the first work to try to leverage both the Euclidean and hyperbolic features.
2) It remains scalable, although it would have been interesting to see some running times -- Möbius multiplication and operations in a non-Euclidean space can be more costly than their Euclidean counterparts.

__ Weaknesses__: 1) __Over-explored topic, and ``okay`` results__ The topic (graph embedding, hyperbolic spaces) has been very well explored of late. As such, this paper suffers from the considerable difficulty of making a case for yet another graph embedding technique: what justifies so many transformations back and forth between spaces? The results that the authors show are very competitive and seem to beat SOTA methods. However, while the accuracy seems higher, it is not necessarily statistically significant in many of the presented examples. It would have been interesting to make a stronger case for these results by adding running times or other performance metrics.
2) __Few experiments:__ The authors present only two sets of experiments (on several datasets, but the experiments are the same): node classification and link prediction. It would have been interesting to try other tasks, such as graph classification.
3) Lack of code
######### Rebuttal Update #########
After reviewing the authors' rebuttal and going through their submission, I have decided to increase my score. I have revised my opinion: while I maintain that this is an over-explored topic (and I wonder whether the sample size necessary to obtain good results is reasonable enough to make the method practically usable in real life, for instance in the medical applications the authors mention), the authors' method does provide some improvement over state-of-the-art methods and nicely extends the discussion in this particular area.

__ Correctness__: No problem detected.

__ Clarity__: Yes.

__ Relation to Prior Work__: Yes.

__ Reproducibility__: No

__ Additional Feedback__:

__ Summary and Contributions__: This paper proposes GIL, a new type of graph neural network (GNN) suitable for graphs without an underlying constant curvature, for which no computationally efficient Riemannian manifold could be used. The main idea is to leverage the power of both Euclidean and hyperbolic geometries and the corresponding GNNs. The proposed model trains two GAT networks in parallel, one Euclidean and one hyperbolic, followed by cross interactions between their embeddings that are meant to encourage interaction between the two geometries. Finally, the resulting embeddings are fed through Euclidean and hyperbolic softmax MLR layers for node and edge classification tasks. Carefully conducted experiments on 5 datasets and 2 tasks show consistent and significant improvements w.r.t. the baselines. Ablation studies are also nicely done.

__ Strengths__: See above.

__ Weaknesses__: - My main concern is regarding the reproducibility of this method. This is a heavy black-box model. Implementation details are almost entirely missing (even the number of layers is not specified). No code is currently published. Will the authors publish the full code and evaluation settings that would allow full reproducibility of these results? I would be willing to increase my current rating if this issue were addressed.
- Claimed contribution in lines 77-78: the authors do not mention, cite, or use as baselines any of the recent works on products of constant curvature spaces, e.g. [1,2,3]. Technically, these methods could jointly learn both Euclidean and hyperbolic representations. How does the proposed model compare against GNNs that embed graphs in products of constant curvature spaces with fixed or learnable curvatures, e.g. [2]?
- Line 32: "although they may suffer the problem of permutation variance" --> as far as I am aware, none of the hyperbolic GNN models suffers from this problem. Can the authors detail this statement?
- Line 119: another option is to use gyro-midpoints or Riemannian Fréchet means, e.g. see [2,4].
- Lines 158-160: "It is worth mentioning that after the feature fusion, the node embeddings [...] maintain properties and structures of the original space." --> This reads like a vague statement to me. Can you please clarify why it preserves those?
- Line 252: "We evaluated all methods over the same random seed for 10 runs" --> why do you observe any differences if you use the same random seed?
[1] Learning Mixed-curvature Representations in Product Spaces, Gu et al., ICLR 2019
[2] Constant Curvature Graph Convolutional Networks, Bachmann et al., ICML 2020
[3] Mixed-curvature Variational Autoencoders, Skopek et al., ICLR 2020
[4] Differentiating through the Fréchet Mean, Lou et al., ICML 2020
================================
LE (after rebuttal): the authors partially addressed my comments. I will increase my score. I am still not sure how this work compares to embeddings of graphs in products of constant curvature spaces that have an overall variable curvature.

__ Correctness__: See above. This is largely a deep learning paper, there is no theory. The empirical methodology seems valid.

__ Clarity__: Paper is clearly written.

__ Relation to Prior Work__: Please see some missing citations above in the "Weaknesses" section.

__ Reproducibility__: No

__ Additional Feedback__: The current method is a heavy black-box model. No code is currently published. Will the authors publish the full code and evaluation settings that would allow reproducibility of their method?

__ Summary and Contributions__: The paper proposes a new method to address an important problem in geometric embedding. The authors argue that a single geometric embedding lacks the flexibility to model complex geometries on graphs. To address this issue, the paper proposes a method that enables interaction learning between Euclidean and hyperbolic geometries for graphs with complex geometries. The experimental validation demonstrates significant improvement over the baselines.

__ Strengths__: 1. Motivation. This paper presents an important problem on geometry interaction learning on graphs. Existing works only consider a single geometry embedding, such as Euclidean or hyperbolic, which lacks the flexibility to model complex geometries and cannot fully capture the geometric information available in graphs. This work, in contrast, tries to capture both kinds of geometric features simultaneously in a graph, and gives each node the power to decide which geometric features are more reliable for the downstream tasks.
2. Novelty. The authors provide a novel framework that unifies Euclidean and hyperbolic geometries, two previously disparate research areas, utilizing the strengths of both geometries and maintaining the geometric properties of each space, via two essential parts: interaction learning and probability assembling. This is a significant contribution toward a more general setting in graph geometry learning that does not require any prior structural information.
3. Soundness. The extensive theoretical analysis and empirical evaluations support the major claims of the paper, with SOTA results and convincing experimental analysis. The visualizations further show the interpretability of the learned embeddings and the different weights.
4. Relevance. There has been increasing interest in geometric deep learning on graphs. The significance and contributions of this paper to this domain are undisputed.

__ Weaknesses__: Overall, I found the paper interesting and practically useful, although I believe some additions to the empirical evaluation could improve the impact of the paper if there is any space left, such as the low-dimensional modeling ability of the proposed method, along with some experimental details, including the search space of the hyper-parameters and a detailed description of the compared methods.

__ Correctness__: The technical content of the paper is correct, and the major claims are well supported by the theoretical analysis and empirical evaluations. One part is unclear: the optimizer. The paper involves parameters from both Euclidean and hyperbolic geometries, such as the Euclidean parameters W in Eq. (8), a in Eq. (10), and \beta in Eq. (12), and the hyperbolic parameter b in Eq. (8). As stated in the experimental settings, the Adam optimizer is used to train the model. I suspect that the authors transfer the hyperbolic parameters to Euclidean parameters as the trainable objects, so that the optimizer can be used uniformly. This part should be clarified for correctness.
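To make the suspected mechanism concrete, here is a hypothetical NumPy sketch of optimizing a hyperbolic parameter through its Euclidean tangent-space coordinates; numerical gradients stand in for autodiff, plain gradient descent stands in for Adam, and the toy loss is mine, so this is an illustration of the suspected scheme, not the authors' code.

```python
import numpy as np

def exp_map_zero(v, c=1.0):
    """Poincaré-ball exponential map at the origin (tangent vector -> point on the ball)."""
    norm = max(np.linalg.norm(v), 1e-9)
    return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

def loss(b_tan, target):
    """Toy objective: squared distance of the mapped parameter to a fixed point on the ball."""
    return np.sum((exp_map_zero(b_tan) - target) ** 2)

def num_grad(f, x, eps=1e-6):
    """Central-difference gradient, standing in for autodiff."""
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

target = np.array([0.2, 0.1])   # a point inside the unit ball
b_tan = np.array([0.5, -0.3])   # Euclidean trainable coordinates of the hyperbolic bias
for _ in range(200):            # plain gradient descent, standing in for Adam
    b_tan = b_tan - 0.1 * num_grad(lambda x: loss(x, target), b_tan)
```

The optimizer only ever updates `b_tan`, an ordinary Euclidean vector; the exponential map in the forward pass keeps the effective parameter on the manifold, which is presumably what lets a single Adam instance handle both kinds of parameters.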

__ Clarity__: Yes. This paper is well organized and easy to follow. The notations are consistent throughout the document, with the necessary descriptions clearly stated. The model structure depicted in Figure (2) gives a clear and intuitive picture of the overall framework.

__ Relation to Prior Work__: Yes. The authors provide the relevant related works and elegantly show how their work improves on the existing literature in the Introduction section. Existing works only consider a single geometry embedding, either Euclidean or hyperbolic, while this work presents interaction learning between Euclidean and hyperbolic geometries for modeling graphs, which is a brand-new attempt in geometric deep learning.

__ Reproducibility__: Yes

__ Additional Feedback__: In reality, many complex graphs may exhibit more than one geometric structure, and we cannot know in advance which geometric features are more appropriate for the nodes in graph representation learning. The paper introduces this significant problem and then proposes a comprehensive geometry interaction learning method, GIL, in response.
In GIL, the different geometric features can be adaptively updated in the local space and then conformally adjusted from the dual space, which is not a trivial undertaking between two different geometries with independent closed-form operations. Meanwhile, the more intuitive interaction scheme EucHyp is not ideal, as the results in Table 2 show. In my view, this is an effective attempt to address a meaningful problem.
The paper is generally well-written and structured clearly. However, there are still some issues that need to be clarified.
1) The values of the hyperbolicity distribution are presumably meant to denote the proportion of the corresponding hyperbolicity; this should be stated clearly.
2) For the experimental settings, the authors basically follow the same settings as related works. However, it would be beneficial for readers, for comparison and verification, if the details of the settings, such as the search space of the hyperparameters, were included in the supplementary material.

__ Summary and Contributions__: This paper proposes a graph geometry interaction learning method which allows a graph to be embedded into a Euclidean space and a hyperbolic space simultaneously. More specifically, the mapping between the two spaces, a distance-aware attention mechanism for hyperbolic propagation, and a feature interaction learning module are the main contributions. Experiments on node classification and link prediction show promising results.

__ Strengths__: The theoretical grounding (hyperbolic geometry, graph attention networks) and the extended contributions (geometry message propagation, probability assembling) are clear and reliable.
The visualization of Euclidean and hyperbolic embeddings is intuitive.
The topic is around improving GNN to extract more features and is relevant to NeurIPS community.

__ Weaknesses__: The explanation of why the exponential and logarithmic mappings are applied to the features, and of the feature interaction, could be developed in more detail.
The model can be seen as a combination of a Euclidean and a hyperbolic model, so novelty is limited. Also, the model performs an online ensemble of these two models. What if the two models were trained offline, before being combined via a weight hyperparameter?
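The offline baseline suggested above would be trivial to implement; for instance (an illustrative sketch -- the function names are mine, and `alpha` would be tuned on a validation set rather than learned jointly):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def offline_ensemble(logits_euc, logits_hyp, alpha):
    """Fixed-weight combination of two independently trained models' class probabilities."""
    return alpha * softmax(logits_euc) + (1.0 - alpha) * softmax(logits_hyp)
```

Since `alpha = 1` recovers the pure Euclidean model and `alpha = 0` the pure hyperbolic one, such a baseline would directly quantify how much the joint, online interaction actually adds.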

__ Correctness__: Yes.

__ Clarity__: Yes.

__ Relation to Prior Work__: Yes, the proposed interaction learning aims to collect features from the hyperbolic and Euclidean spaces, and the discussion of prior works is well organized and clear.

__ Reproducibility__: Yes

__ Additional Feedback__: The authors addressed my concerns during the rebuttal period. My score remains the same.