This paper presents new PAC-Bayesian risk bounds that remove some of the assumptions and limitations of traditional PAC-Bayesian analysis. Namely, the new bounds allow for data-dependent priors (which have recently been shown to drastically improve tightness) and unbounded loss functions (which arise in, e.g., regression problems). While prior work has addressed both of these issues individually, the paper presents a general framework that, when instantiated, improves on some prior bounds (e.g., better constants for data-dependent priors). The paper also clarifies the distinction between "data-dependent" and "distribution-dependent" priors, which may help the broader ML community better understand these concepts.

The reviews are somewhat mixed, but lean toward acceptance. The reviewers do highlight some key concerns, which I hope the authors will address when revising the paper.

1) The bounds are somewhat non-standard in that they assume a fixed learning algorithm (in this case, a stochastic kernel) that, given a dataset, maps to a posterior distribution on the hypothesis space. The standard PAC-Bayesian theorem states that, "w.h.p. over draws of data, <bound> holds simultaneously for all posteriors." The paper's main bound says that, "for any stochastic kernel, w.h.p. over draws of data, <bound> holds for the resulting posterior." As R2 points out, this does not hold uniformly for _all_ posteriors; it holds only for posteriors that the fixed kernel can output. I suspect that this is a weaker statement than traditional PAC-Bayes bounds (see the sketch of the quantifier difference at the end of this meta-review). The reviewers and I would like the authors to acknowledge and discuss this in the paper.

2) As R1 points out, and the authors agree, "there is a relationship between the concentration of the loss and the concentration of the covariance eigenvalues." Since prior work has approached unbounded losses by assuming concentration of the loss, this may weaken the novelty of the results. The authors should at least discuss this in the paper.

3) Some relationships to prior work could be better addressed (see the reviews for specific citations).

Beyond the above, I strongly encourage the authors to incorporate _all_ feedback from the reviewers when revising the paper.
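
For concreteness on point 1), here is a rough sketch of the quantifier difference, using a standard Maurer-style kl-bound with a loss in [0,1] as a stand-in for the paper's <bound>; the notation (prior P, sample S of size n, posterior Q(S) output by the kernel) is mine and is only illustrative, not the paper's exact statement:

\[
  \Pr_{S \sim D^n}\!\Big[\ \forall Q:\ \mathrm{kl}\big(\hat{L}_S(Q)\,\big\|\,L_D(Q)\big) \le \tfrac{\mathrm{KL}(Q\|P) + \ln(2\sqrt{n}/\delta)}{n}\ \Big] \ge 1-\delta
  \quad \text{(standard: all posteriors inside the event)},
\]
\[
  \Pr_{S \sim D^n}\!\Big[\ \mathrm{kl}\big(\hat{L}_S(Q(S))\,\big\|\,L_D(Q(S))\big) \le \langle\text{bound}\rangle\ \Big] \ge 1-\delta
  \quad \text{(paper: a single, fixed kernel } S \mapsto Q(S)\text{ chosen before the draw)}.
\]

In the first statement the "for all Q" sits inside the high-probability event; in the second, the choice of kernel (and hence of achievable posteriors) is made outside it.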