Annals of Applied Probability (Ann. Appl. Probab.)
Volume 27, Number 3 (2017), 1551-1587.
Nonasymptotic convergence analysis for the unadjusted Langevin algorithm
Alain Durmus and Éric Moulines
Abstract
In this paper, we study a method to sample from a target distribution $\pi$ over $\mathbb{R}^{d}$ having a positive density with respect to the Lebesgue measure, known up to a normalisation factor. This method is based on the Euler discretization of the overdamped Langevin stochastic differential equation associated with $\pi$. For both constant and decreasing step sizes in the Euler discretization, we obtain nonasymptotic bounds for the convergence to the target distribution $\pi$ in total variation distance. Particular attention is paid to the dependence on the dimension $d$, to demonstrate the applicability of this method in the high-dimensional setting. These bounds improve and extend the results of Dalalyan [J. R. Stat. Soc. Ser. B. Stat. Methodol. (2017) 79 651–676].
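The algorithm studied in the abstract, the unadjusted Langevin algorithm (ULA), is the Euler discretization of the overdamped Langevin SDE $dX_t = \nabla \log \pi(X_t)\,dt + \sqrt{2}\,dB_t$, i.e. the recursion $X_{k+1} = X_k + \gamma_{k+1} \nabla \log \pi(X_k) + \sqrt{2\gamma_{k+1}}\, Z_{k+1}$ with i.i.d. standard Gaussian $Z_{k+1}$. The following is a minimal sketch of that recursion, not code from the paper; the Gaussian target used for illustration is an assumption.

```python
import numpy as np

def ula(grad_log_pi, x0, step_sizes, rng=None):
    """Unadjusted Langevin algorithm: Euler discretization of the
    overdamped Langevin SDE dX_t = grad log pi(X_t) dt + sqrt(2) dB_t.

    grad_log_pi : callable returning the gradient of log pi at a point
    x0          : initial point in R^d
    step_sizes  : iterable of (constant or decreasing) step sizes gamma_k
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    chain = [x.copy()]
    for gamma in step_sizes:
        noise = rng.standard_normal(x.shape)
        x = x + gamma * grad_log_pi(x) + np.sqrt(2.0 * gamma) * noise
        chain.append(x.copy())
    return np.array(chain)

# Illustrative target (an assumption, not from the paper): a standard
# Gaussian in d dimensions, for which grad log pi(x) = -x.
d = 10
samples = ula(lambda x: -x, x0=np.zeros(d), step_sizes=[0.05] * 10_000)
```

Unlike the Metropolis adjusted Langevin algorithm, no accept/reject correction is applied, so the chain targets a biased approximation of $\pi$; the paper's bounds quantify this bias in total variation as a function of the step sizes and the dimension $d$.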
Article information
Source
Ann. Appl. Probab., Volume 27, Number 3 (2017), 1551-1587.
Dates
Received: March 2016
Revised: August 2016
First available in Project Euclid: 19 July 2017
Permanent link to this document
https://projecteuclid.org/euclid.aoap/1500451235
Digital Object Identifier
doi:10.1214/16-AAP1238
Mathematical Reviews number (MathSciNet)
MR3678479
Zentralblatt MATH identifier
1377.65007
Subjects
Primary: 65C05 Monte Carlo methods; 60F05 Central limit and other weak theorems; 62L10 Sequential analysis
Secondary: 65C40 Computational Markov chains; 60J05 Discrete-time Markov processes on general state spaces; 93E35 Stochastic learning and adaptive control
Keywords
Total variation distance; Langevin diffusion; Markov chain Monte Carlo; Metropolis adjusted Langevin algorithm; rate of convergence
Citation
Durmus, Alain; Moulines, Éric. Nonasymptotic convergence analysis for the unadjusted Langevin algorithm. Ann. Appl. Probab. 27 (2017), no. 3, 1551--1587. doi:10.1214/16-AAP1238. https://projecteuclid.org/euclid.aoap/1500451235