Part of Advances in Neural Information Processing Systems 32 (NeurIPS 2019)
Santosh Vempala, Andre Wibisono
We study the Unadjusted Langevin Algorithm (ULA) for sampling from a probability distribution ν = e^{−f} on ℝ^n. We prove a convergence guarantee in Kullback-Leibler (KL) divergence assuming ν satisfies a log-Sobolev inequality and f has a bounded Hessian. Notably, we do not assume convexity or bounds on higher derivatives. We also prove convergence guarantees in Rényi divergence of order q > 1 assuming the limit of ULA satisfies either a log-Sobolev or a Poincaré inequality.
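For readers unfamiliar with the algorithm, ULA is the Euler–Maruyama discretization of the Langevin diffusion: x_{k+1} = x_k − h ∇f(x_k) + √(2h) z_k with z_k ~ N(0, I), whose iterates approximately target ν = e^{−f}. The sketch below is a minimal illustration of this iteration, not code from the paper; the function names and the Gaussian test case are our own assumptions.

```python
import numpy as np

def ula_sample(grad_f, x0, step_size, num_steps, rng=None):
    """Run the Unadjusted Langevin Algorithm (ULA).

    Iterates x_{k+1} = x_k - h * grad_f(x_k) + sqrt(2h) * z_k,
    with z_k ~ N(0, I), approximately targeting nu = e^{-f}.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    for _ in range(num_steps):
        noise = rng.standard_normal(x.shape)
        x = x - step_size * grad_f(x) + np.sqrt(2.0 * step_size) * noise
    return x

# Example (hypothetical test case): sample from a standard Gaussian,
# where f(x) = ||x||^2 / 2 and grad f(x) = x.
if __name__ == "__main__":
    samples = np.array([
        ula_sample(grad_f=lambda x: x, x0=np.zeros(2),
                   step_size=0.01, num_steps=1000)
        for _ in range(500)
    ])
    print("empirical mean:", samples.mean(axis=0))   # should be near 0
    print("empirical cov:\n", np.cov(samples.T))     # should be near identity
```

Because the Gaussian potential has a bounded (in fact constant) Hessian and its target satisfies a log-Sobolev inequality, it falls within the assumptions of the KL-divergence guarantee stated in the abstract.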