The paper develops generalization bounds based on information-theoretic notions. The techniques may offer advantages over the recently proposed conditional mutual information (CMI) approach; in particular, the authors compose the CMI technique with the "chaining" technique to recover the known sharp generalization bounds for agnostic learning with VC classes, whereas it remains open whether such sharp bounds are achievable via CMI directly. The reviewers all favor acceptance. One reviewer found a mistake in the paper, which the authors acknowledge in their response and propose to fix; in their post-rebuttal updates, however, the reviewers suggest better corrections. The reviews also note that a couple of the results are known or follow from known results (though others are new and interesting), and that the paper lacks concrete examples where these techniques offer provable advantages over all existing analyses.
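For context, a minimal LaTeX sketch of the bounds at issue, assuming the standard Steinke-Zakynthinou setup (loss bounded in $[0,1]$, $n$ i.i.d. samples); the notation $\mathrm{gen}(A)$ and $\mathrm{CMI}_{\mathcal{D}}(A)$ is illustrative, not taken from the paper:

% Direct CMI bound (Steinke & Zakynthinou, 2020), loss in [0,1]:
\[
  \bigl|\mathbb{E}[\mathrm{gen}(A)]\bigr|
  \;\le\;
  \sqrt{\frac{2\,\mathrm{CMI}_{\mathcal{D}}(A)}{n}} .
\]
% For a VC class of dimension d, CMI is O(d log n), so the direct bound yields
% a rate of sqrt(d log n / n); composing CMI with chaining removes the log n
% factor and recovers the sharp agnostic rate:
\[
  \mathbb{E}\bigl[\mathrm{gen}(A)\bigr] \;=\; O\!\Bigl(\sqrt{\tfrac{d}{n}}\Bigr).
\]

Whether the $\sqrt{d/n}$ rate is achievable via CMI alone, without chaining, is the open question referred to above.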