Summary and Contributions: In this paper, the authors propose a new method for reducing discrete integration problems to simpler model counting problems that relaxes the restriction to dyadic weights. The proposed method is evaluated on benchmarks from neural network verification, and its effectiveness is validated.
Strengths: The idea of reducing discrete integration to model counting is not new. However, the contribution of relaxing the dyadic weight restriction is significant, and it lifts the need for pre-processing steps (which could also result in poor approximations). The theoretical aspects of this work seem solid and provide rigorous guarantees for the proposed method.
Weaknesses: Unfortunately, I cannot comment on the scientific contribution or the weaknesses of the paper, as I do not possess the expertise to judge the theoretical results accurately. My expertise lies so far outside this field that I will rely on the judgement of the other reviewers, who I hope will have more experience and will know the literature better.
Correctness: I cannot judge the correctness of the theoretical results since I am not an expert in this field.
Clarity: Yes.
Relation to Prior Work: Yes.
Reproducibility: Yes
Additional Feedback:
Summary and Contributions: This paper builds upon the WMC-to-MC reduction presented in [1], proposing a technique that lifts the original work's restriction to dyadic weights. Leveraging an existing (epsilon,delta)-approximate model counter, ApproxMC4 [2], the resulting (epsilon,delta)-approximate WMC solver, dubbed DeWeight, is tested on benchmarks arising from neural network verification [3].
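For readers unfamiliar with the dyadic restriction, here is a minimal brute-force sketch of the counting identity behind the reduction in [1] (my own Python illustration, not the paper's chain-formula CNF encoding; all function names are hypothetical). A weight k / 2**m on a variable is simulated by m fresh bits, so the unweighted count of the reduced formula, divided by 2 raised to the total number of fresh bits, recovers the WMC:

```python
from itertools import product

def weighted_count(formula, weights, n):
    """Exact WMC of `formula` over n variables; weights[i] = Pr(x_i = 1)."""
    total = 0.0
    for bits in product([0, 1], repeat=n):
        if formula(bits):
            w = 1.0
            for i, b in enumerate(bits):
                w *= weights[i] if b else 1 - weights[i]
            total += w
    return total

def reduced_count(formula, dyadic, n):
    """Unweighted count of the reduced formula; dyadic[i] = (k_i, m_i).

    x_i is forced to be true iff its m_i fresh bits, read as an
    integer, fall below k_i -- i.e. with "probability" k_i / 2**m_i.
    """
    total_m = sum(m for _, m in dyadic)
    count = 0
    for bits in product([0, 1], repeat=n + total_m):
        x, fresh = bits[:n], bits[n:]
        ok, pos = bool(formula(x)), 0
        for i, (k, m) in enumerate(dyadic):
            val = int("".join(map(str, fresh[pos:pos + m])), 2)
            ok = ok and ((val < k) == bool(x[i]))
            pos += m
        count += ok
    return count, total_m

# x0 OR x1 with weights 3/8 and 1/2 (both dyadic):
f = lambda b: b[0] or b[1]
print(weighted_count(f, [3 / 8, 1 / 2], 2))   # 0.6875
cnt, m = reduced_count(f, [(3, 3), (1, 1)], 2)
print(cnt / 2 ** m)                           # 0.6875 as well
```

A weight such as 1/3 has no finite dyadic expansion, which is the source of the pre-processing (and potential approximation error) mentioned above and which DeWeight lifts.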
Strengths: Although the proposed technique is evaluated only on formal verification tasks, the problem is relevant to a much broader range of applications. The proposed technique effectively addresses the limitations of a previously proposed approach.
Weaknesses: I think that the paper could be more explicit in describing the limits of the proposed approach. My main criticism regards the empirical evaluation, which, in my opinion, could be improved (see detailed comments).
Correctness: The approach looks correct, although I am not entirely convinced by the empirical evaluation of the solver (see detailed comments).
Clarity: Yes.
Relation to Prior Work: Yes.
Reproducibility: Yes
Additional Feedback: I think that the proposed reduction technique is a neat improvement over [1], although I find it difficult to assess the significance of this contribution. Given the generality of DeWeight, I wonder why the authors evaluated it only on benchmarks from [3]. I am aware that WISH-like approaches suffer from the lack of XOR support in MaxSAT solvers, and that some (epsilon,delta)-approximate WMC solvers do not natively support projected MC, but the choice of benchmarks and competitors is rather restricted and makes it hard to answer the ultimate question: "How good is DeWeight at WMC?". Is there a reason for not including [4] in the comparison?

After reading the paper I felt that a number of questions were left unanswered: is DeWeight a state-of-the-art solver for WMC, or should I use it only when my problems can be projected onto few Boolean variables? How well does the proposed reduction scale with respect to existing WMC approaches when the weights are arbitrary rationals and not in {0.1, 0.2, ..., 0.9}? Since the priors are very often learned from data, why not sample the weights uniformly from [0,1]?

Finally, regarding "In particular, recent work demonstrated that determining properties of neural networks, such as adversarial robustness, fairness, and susceptibility to trojan attacks, can be reduced to discrete integration": I think that this applies only to NNs with discrete inputs, such as the binarized NNs from [3]. I wish the paper were more explicit in describing the limitations of the approach.

---

[1] Chakraborty, Supratik, et al. "From weighted to unweighted model counting." Twenty-Fourth International Joint Conference on Artificial Intelligence (IJCAI). 2015.
[2] Soos, Mate, Stephan Gocht, and Kuldeep S. Meel. "Accelerating approximate techniques for counting and sampling models through refined CNF-XOR solving." Proceedings of the International Conference on Computer-Aided Verification (CAV). 2020.
[3] Baluta, Teodora, et al. "Quantitative verification of neural networks and its security applications." Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security (CCS). 2019.
[4] Chakraborty, Supratik, et al. "Distribution-aware sampling and weighted model counting for SAT." Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI). 2014.

---

I thank the authors for their feedback. The size of the tilt (the ratio between the weights of the heaviest and lightest satisfying assignments) clarifies why [4] was not considered in that particular experiment. I too agree that the small-tilt assumption is not realistic, and I wish this aspect were mentioned in the text; DeWeight makes assumptions on the weight function too. The results on the NN verification benchmarks are impressive, but the setting feels carefully chosen to exclude a priori any competitor other than Dyadic. I agree that 1000+ formulas represent a fairly large benchmark, but they come from the same application domain (and thus have similar structural properties), and the choice of weights is rather limited. I still think that the empirical evaluation could be extended to other settings (even synthetic ones) and include other solvers. Some natural questions to ask: how does DeWeight relate to existing WMC approaches when the weights are arbitrary? When can the reduction to (unweighted) MC be leveraged in practice? The discussion of the proposed approach tends to overlook these limitations. After reading the rebuttal I slightly raised my score.
Provided that the authors clarify the limitations of the proposed technique, this submission could be accepted with the current experimental section. Yet, I hope that the authors will provide more empirical evidence of the merits and limitations of the proposed reduction.
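As an aside on the tilt discussion above, a minimal sketch (my own Python illustration, not code from the paper or from [4]; it assumes the usual product weight function and bounds the tilt over all assignments rather than only the satisfying ones) of why even weights confined to {0.1, ..., 0.9} can put these benchmarks far outside the small-tilt regime that [4] requires:

```python
# Upper-bound the tilt of a product weight function: the ratio between
# the heaviest and lightest full assignments grows exponentially in the
# number of variables, even for a modest per-variable weight range.
def tilt_bound(weights):
    hi = lo = 1.0
    for w in weights:          # weights[i] = Pr(x_i = 1)
        hi *= max(w, 1 - w)    # heaviest per-variable factor
        lo *= min(w, 1 - w)    # lightest per-variable factor
    return hi / lo

print(tilt_bound([0.9] * 20))  # (0.9 / 0.1)**20 ~ 1.2e19: far beyond
                               # any practical small-tilt regime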
Summary and Contributions: 1. It builds on the paradigm of reducing discrete integration to model counting and proposes a reduction that relaxes the restriction to dyadic weights from prior work. 2. It addresses the scalability challenge of discrete integration via an efficient reduction to model counting.
Strengths: It proposes a reduction that relaxes the restriction to dyadic weights from prior work, with theoretical guarantees and some empirical evaluation.
Weaknesses: 1. This work claims much better scalability than previous work, but the experiments themselves are not large-scale. 2. This work claims to be useful for neural network verification, but I do not see experiments in these domains. 3. The baseline methods are rather few, and the experiments are not sufficient to support the paper's argument.
Correctness: I am not quite sure.
Clarity: I think there is still much room for the authors to refine their paper.
Relation to Prior Work: Yes.
Reproducibility: Yes
Additional Feedback: