Open Access
High-dimensional Bayesian inference via the unadjusted Langevin algorithm
Alain Durmus, Éric Moulines
Bernoulli 25(4A): 2854-2882 (November 2019). DOI: 10.3150/18-BEJ1073

Abstract

We consider in this paper the problem of sampling a high-dimensional probability distribution $\pi$ having a density w.r.t. the Lebesgue measure on $\mathbb{R}^{d}$, known up to a normalization constant, $x\mapsto\pi(x)=\mathrm{e}^{-U(x)}/\int_{\mathbb{R}^{d}}\mathrm{e}^{-U(y)}\,\mathrm{d}y$. Such a problem arises naturally, for example, in Bayesian inference and machine learning. Under the assumptions that $U$ is continuously differentiable, $\nabla U$ is globally Lipschitz and $U$ is strongly convex, we obtain non-asymptotic bounds on the convergence to stationarity, in Wasserstein distance of order $2$ and in total variation distance, of the sampling method based on the Euler discretization of the Langevin stochastic differential equation, for both constant and decreasing step sizes. The dependence of these bounds on the dimension of the state space is explicit. The convergence of an appropriately weighted empirical measure is also investigated, and bounds on the mean-square error together with exponential deviation inequalities are reported for measurable and bounded functions. An illustration on Bayesian inference for binary regression is presented to support our claims.
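The sampler analyzed here, the unadjusted Langevin algorithm (ULA), is the Euler-Maruyama discretization of the Langevin diffusion $\mathrm{d}X_{t}=-\nabla U(X_{t})\,\mathrm{d}t+\sqrt{2}\,\mathrm{d}B_{t}$, i.e. the recursion $X_{k+1}=X_{k}-\gamma_{k+1}\nabla U(X_{k})+\sqrt{2\gamma_{k+1}}\,Z_{k+1}$ with i.i.d. standard Gaussian noise $(Z_{k})_{k\geq1}$. Below is a minimal Python sketch of this recursion with a constant step size; the function name ula_sample and the choice of target $U(x)=\|x\|^{2}/2$ (a standard Gaussian, which is strongly convex with Lipschitz gradient) are illustrative assumptions, not taken from the paper.

import numpy as np

def ula_sample(grad_U, x0, step, n_iter, rng=None):
    """Unadjusted Langevin algorithm (ULA) with constant step size.

    Iterates X_{k+1} = X_k - step * grad_U(X_k) + sqrt(2*step) * Z_{k+1},
    the Euler-Maruyama discretization of dX_t = -grad U(X_t) dt + sqrt(2) dB_t,
    whose invariant distribution is pi(x) proportional to exp(-U(x)).
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_iter, x.size))
    for k in range(n_iter):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.size)
        samples[k] = x
    return samples

# Illustrative target: U(x) = ||x||^2 / 2, so grad U(x) = x and pi = N(0, I_d).
d = 10
chain = ula_sample(grad_U=lambda x: x, x0=np.zeros(d), step=0.1, n_iter=10_000)
print(chain[1_000:].mean(axis=0))  # sample mean approaches 0 after burn-in

With a constant step size $\gamma$, the chain converges to a biased approximation of $\pi$ whose error is quantified by the non-asymptotic bounds of the paper; with decreasing step sizes $\gamma_{k}\downarrow0$ satisfying $\sum_{k}\gamma_{k}=\infty$, the asymptotic bias vanishes.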

Citation


Alain Durmus, Éric Moulines. "High-dimensional Bayesian inference via the unadjusted Langevin algorithm." Bernoulli 25(4A): 2854-2882, November 2019. https://doi.org/10.3150/18-BEJ1073

Information

Received: 1 July 2017; Revised: 1 July 2018; Published: November 2019
First available in Project Euclid: 13 September 2019

zbMATH: 07110114
MathSciNet: MR4003567
Digital Object Identifier: 10.3150/18-BEJ1073

Keywords: Langevin diffusion, Markov chain Monte Carlo, Metropolis adjusted Langevin algorithm, rate of convergence, total variation distance

Rights: Copyright © 2019 Bernoulli Society for Mathematical Statistics and Probability
