This paper proposes a new method for stabilizing GANs for language generation. After the author rebuttal and reviewer discussion, the scores remained divergent: in the end, the paper received two reject and two accept recommendations. On one hand, the main criticism of the paper is the large number of hyper-parameters and tricks that must be tuned. On the other hand, the reviewers appreciated the additional clarifications and experiments in the rebuttal, and felt that the paper provides a careful and insightful analysis of text GANs. Text GAN papers typically report only simple experiments on unconditional text generation with LSTMs in a rather "toy" setting. In this paper, the authors use SOTA pre-trained language models and evaluate on question generation and abstractive summarization. It is non-trivial for the proposed method to improve over strong MLE pre-trained models (T5 and BART). On balance, the AC recommends accepting the paper. The authors are encouraged to take the reviewers' comments into account when preparing the camera-ready version.