Sun Dec 8th through Sat the 14th, 2019 at Vancouver Convention Center
Summary: This paper proposes a semantic parsing and program synthesis method. Code generation relies on low-level and high-level abstractions; high-level abstractions can be thought of as functions that are re-used across several programs. To model high-level abstraction, the authors use a code-idiom mining method from the literature. Once the code idioms are extracted, the program is generated; the generative process can emit either individual tokens or whole idioms.

Strengths: The paper is well written and well organized. The motivation is stated clearly and contrasted well against the literature. The experiments seem satisfying, although I have a follow-up question that I will state in the next section. The proposed model is novel and seems suitable for the problem at hand, though I also have a follow-up question about the proposed two-step model.

Weaknesses: I did not understand what the baseline is. How does the decoder in Section 4 differ from the proposed model? One would also expect a performance comparison with the similar sketch-generation methods cited by the authors [12, 23], since the proposed model is closest to these methods in its high-level idea. How is the idiom-mining part of the model evaluated? The idiom-mining step seems to introduce a point of failure that cannot be trained end-to-end. Do the authors have any comments on this? Have they performed any analysis of the failures that result from this step? This reference seems to be missing: Riedel, Sebastian, Matko Bosnjak, and Tim Rocktäschel. "Programming with a differentiable forth interpreter." CoRR, abs/1605.06640 (2016). It learns program sketches and then fills in the sketch.

---------------------------------------

Post-rebuttal update: I have read the response and maintain my accept rating of 7.
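To make the "tokens or idioms" generative process concrete, here is a minimal toy sketch (not the paper's implementation; the grammar, idiom table, and scripted actions are all invented for illustration). At each step the decoder expands a nonterminal with either a single grammar production or a mined multi-node idiom, so one idiom action can replace several rule actions:

```python
# Toy grammar: nonterminal -> list of candidate right-hand sides.
GRAMMAR = {
    "stmt": [["expr"], ["if", "expr", "stmt"]],
    "expr": [["NAME"], ["NAME", "+", "NAME"]],
}

# A mined "idiom" bundles several productions into one action: here the
# whole pattern `if NAME: NAME + NAME` becomes a single decoding step.
IDIOMS = {
    "stmt": [["if", "NAME", "NAME", "+", "NAME"]],
}

def expand(symbol, actions):
    """Expand `symbol` using a scripted list of (kind, index) actions,
    where kind is "rule" (one production) or "idiom" (a mined fragment)."""
    kind, idx = actions.pop(0)
    rhs = (IDIOMS if kind == "idiom" else GRAMMAR)[symbol][idx]
    out = []
    for s in rhs:
        if s in GRAMMAR:          # nonterminal: recurse with further actions
            out.extend(expand(s, actions))
        else:                     # terminal: emit the token
            out.append(s)
    return out
```

Expanding with the single idiom action `[("idiom", 0)]` yields the same token sequence as the four rule actions `[("rule", 1), ("rule", 0), ("rule", 0), ("rule", 1)]`, which is the point: idioms shorten the decoding sequence.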
The paper is written well and is easy to follow. Using code idioms for program synthesis tasks is an interesting and novel idea; mining the idioms uses an existing Bayesian nonparametric method. The paper highlights its advantage over other sketch-generation works: it can generalize with a few code fragments instead of a grammatically correct, complete intermediate template. An experimental analysis comparing the two would have provided more insight. The time complexity, especially of the idiom-mining step, should be added.
This paper proposes a new framework that uses idiom mining to improve existing models for semantic parsing. The authors argue that one of the main challenges in synthesizing a program is the insufficient separation between high-level and low-level reasoning, which forces the model to learn both the common high-level abstractions and the low-level details that specify the semantic meaning of the generated program. They frame idiom mining as a nonparametric Bayesian problem and first learn a set of code idioms, which are AST fragments of the programs, from the training set. The mined idioms are then used to train the generation model, teaching it to make use of common code snippets rather than generating the program by predicting the AST node by node. The main idea presented in the paper appears interesting and insightful, particularly for semantic parsing, where access to parallel corpora is often limited. The paper is clearly presented and relatively easy to follow. Experimental results on the two benchmark datasets Hearthstone and Spider show noticeable improvements over the selected baselines, although no results for the current state of the art are reported in the paper.
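As a rough illustration of what "mining AST fragments from the training set" means, here is a crude frequency-based stand-in (the paper uses a Bayesian nonparametric miner; this sketch just keeps parent-plus-children fragments that recur at least `min_count` times, and all names and the nested-tuple AST encoding are invented):

```python
from collections import Counter

def fragments(tree):
    """Yield every (label, child-labels) fragment of a nested-tuple AST,
    where a tree is (label, [child_tree, ...])."""
    label, children = tree
    yield (label, tuple(c[0] for c in children))
    for c in children:
        yield from fragments(c)

def mine_idioms(trees, min_count=2):
    """Keep non-leaf fragments that occur at least `min_count` times
    across the corpus -- a crude proxy for 'common code snippet'."""
    counts = Counter(f for t in trees for f in fragments(t))
    return {f for f, n in counts.items() if n >= min_count and f[1]}
```

On a toy corpus where the shape `if(cmp, call)` appears twice, that fragment is mined as an idiom while one-off shapes are not; a real miner would trade off fragment size against frequency rather than use a hard count threshold.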