The reviewers agreed that the paper makes a significant contribution to the interesting area of the length of proofs/explanations in social choice. There was a lengthy discussion among the reviewers, largely centering on the normative appeal of the axioms and on whether the results have the potential to impact explainable ML/AI. The latter point was debated most; let me attempt to summarize.

On the one hand, if an end user wishes an alternative to be chosen according to a fixed set of axioms, then naturally this end user would prefer an explanation solely in terms of those axioms. The paper bounds the length of such explanations. In particular, merely showing why a voting mechanism like Borda selected an alternative X would not be sufficient: the explanation should explain why alternative X was chosen, full stop, not why it was chosen by a particular rule. As such, if one invoked the score totals from Borda, one would also need to give the proof that Borda satisfies the given axioms. Since that proof is itself quite lengthy, the axiomatic explanation from first principles is much preferred.

On the other hand, one may question the likelihood of encountering such an end user. The normative appeal of some of the given axioms is debatable, especially to the point where an end user would have zero hesitation accepting every step of the explanation, and the debatable appeal of axioms seems to be pervasive across voting mechanisms. (One may also point out that Arrow's axioms have normative appeal, which is perhaps why his impossibility result is so famous.) If the end user did not feel strongly about the axioms, then in lieu of an axiomatic explanation, they might prefer something of the form: "One can show that Borda satisfies the following axioms, which are reasonable, and here are the score totals which led to alternative X being chosen." For Borda, this "proof-by-implementation" explanation is short (see the sketch below), though for other voting mechanisms even it may be longer than a strictly axiomatic one; we could not think of an example, however.

A version of the above debate will no doubt take place in the minds of many readers of this paper, especially those from an ML background. As such, we encourage the authors to make a more direct case for why the explanations studied, and therefore the results given, are practically relevant for the ML community.

As a minor note, it took me some time to understand the numbers in Fig 1; perhaps adding "#" to the top row of the tables, or writing "4x" instead of "4", and referencing the extra symbol in the caption, would shorten readers' time to comprehension.
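
To make concrete what the short "proof-by-implementation" explanation for Borda looks like, here is a minimal Python sketch; the function name and the three-voter profile are hypothetical illustrations of ours, not material from the paper. It simply reports the score totals, which is exactly the kind of explanation the discussion above contrasts with a strictly axiomatic one.

```python
from collections import defaultdict

def borda_scores(profile):
    """Each ballot ranks all m alternatives; a ballot awards m-1 points
    to its top choice, m-2 to the next, and so on down to 0."""
    scores = defaultdict(int)
    for ballot in profile:
        m = len(ballot)
        for rank, alternative in enumerate(ballot):
            scores[alternative] += m - 1 - rank
    return dict(scores)

# Hypothetical three-voter profile over alternatives X, Y, Z.
profile = [["X", "Y", "Z"], ["Y", "X", "Z"], ["X", "Z", "Y"]]
scores = borda_scores(profile)
print(scores)                       # {'X': 5, 'Y': 3, 'Z': 1}
print(max(scores, key=scores.get))  # 'X' -- the score totals are the explanation
```

The point of the sketch is that, once one trusts Borda itself, the explanation is just a handful of sums; the open question raised above is whether the accompanying proof that Borda satisfies the axioms can be similarly compressed.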