Open Access
Nonasymptotic convergence analysis for the unadjusted Langevin algorithm
Alain Durmus, Éric Moulines
Ann. Appl. Probab. 27(3): 1551-1587 (June 2017). DOI: 10.1214/16-AAP1238

Abstract

In this paper, we study a method to sample from a target distribution π over Rd having a positive density with respect to the Lebesgue measure, known up to a normalisation factor. This method is based on the Euler discretization of the overdamped Langevin stochastic differential equation associated with π. For both constant and decreasing step sizes in the Euler discretization, we obtain nonasymptotic bounds for the convergence to the target distribution π in total variation distance. Particular attention is paid to the dependence on the dimension d, to demonstrate the applicability of this method in the high-dimensional setting. These bounds improve and extend the results of Dalalyan [J. R. Stat. Soc. Ser. B Stat. Methodol. (2017) 79 651–676].
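For readers unfamiliar with the method, the following is a minimal sketch of the unadjusted Langevin algorithm (ULA) described in the abstract: the Euler discretization X_{k+1} = X_k + γ ∇log π(X_k) + √(2γ) Z_{k+1}, with Z_{k+1} standard Gaussian. The function names, the fixed step size, and the Gaussian example target are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the unadjusted Langevin algorithm (ULA):
# Euler discretization of dX_t = grad log pi(X_t) dt + sqrt(2) dB_t.
import numpy as np

def ula(grad_log_pi, x0, step_size, n_iter, rng=None):
    """Run ULA from x0 with a constant step size; returns the array of iterates."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    chain = np.empty((n_iter + 1, x.size))
    chain[0] = x
    for k in range(n_iter):
        noise = rng.standard_normal(x.size)
        # One Euler step: drift from grad log pi plus Gaussian noise scaled by sqrt(2 * step).
        x = x + step_size * grad_log_pi(x) + np.sqrt(2.0 * step_size) * noise
        chain[k + 1] = x
    return chain

# Illustrative target: standard Gaussian pi(x) ∝ exp(-|x|^2 / 2), so grad log pi(x) = -x.
samples = ula(lambda x: -x, x0=np.zeros(5), step_size=0.05, n_iter=10_000)
```

Note that, unlike the Metropolis adjusted Langevin algorithm, no accept/reject correction is applied, so the chain targets π only approximately; the paper quantifies this discretization error in total variation distance.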

Citation


Alain Durmus, Éric Moulines. "Nonasymptotic convergence analysis for the unadjusted Langevin algorithm." Ann. Appl. Probab. 27(3): 1551–1587, June 2017. https://doi.org/10.1214/16-AAP1238

Information

Received: 1 March 2016; Revised: 1 August 2016; Published: June 2017
First available in Project Euclid: 19 July 2017

zbMATH: 1377.65007
MathSciNet: MR3678479
Digital Object Identifier: 10.1214/16-AAP1238

Subjects:
Primary: 60F05, 62L10, 65C05
Secondary: 60J05, 65C40, 93E35

Keywords: Langevin diffusion, Markov chain Monte Carlo, Metropolis adjusted Langevin algorithm, rate of convergence, total variation distance

Rights: Copyright © 2017 Institute of Mathematical Statistics
