Open Access
Optimal scaling for partially updating MCMC algorithms
Peter Neal, Gareth Roberts
Ann. Appl. Probab. 16(2): 475-515 (May 2006). DOI: 10.1214/105051605000000791

Abstract

In this paper we shall consider optimal scaling problems for high-dimensional Metropolis–Hastings algorithms where updates can be chosen to be lower dimensional than the target density itself. We find that the optimal scaling rule for the Metropolis algorithm, which tunes the overall algorithm acceptance rate to be 0.234, holds for the so-called Metropolis-within-Gibbs algorithm as well. Furthermore, the optimal efficiency obtainable is independent of the dimensionality of the update rule. This has important implications for the MCMC practitioner since high-dimensional updates are generally computationally more demanding, so lower-dimensional updates are to be preferred. Similar results with rather different conclusions are given for so-called Langevin updates. In this case, it is found that high-dimensional updates are frequently most efficient, even taking into account computing costs.
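To illustrate the kind of algorithm studied here, below is a minimal Python sketch of a random-walk Metropolis-within-Gibbs sampler that updates a random block of coordinates per iteration and adapts its proposal scale toward the 0.234 acceptance rate mentioned in the abstract. The function name, the stochastic-approximation tuning scheme, and the example target are illustrative assumptions, not the authors' construction or proofs.

```python
import numpy as np

def metropolis_within_gibbs(log_target, x0, n_iter=10_000, block_size=1,
                            scale=1.0, target_accept=0.234, rng=None):
    """Illustrative random-walk Metropolis-within-Gibbs sampler.

    Each iteration perturbs a random block of `block_size` coordinates
    with a Gaussian proposal; the proposal scale is adapted (with a
    vanishing step size) toward the 0.234 acceptance rate.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    lp = log_target(x)
    chain = np.empty((n_iter, d))
    accepted = 0
    for t in range(n_iter):
        # Propose a move for a randomly chosen block of coordinates.
        idx = rng.choice(d, size=block_size, replace=False)
        prop = x.copy()
        prop[idx] += scale * rng.standard_normal(block_size)
        lp_prop = log_target(prop)
        accept = np.log(rng.random()) < lp_prop - lp
        if accept:
            x, lp = prop, lp_prop
            accepted += 1
        # Adapt the scale toward the 0.234 acceptance-rate target.
        scale *= np.exp((float(accept) - target_accept) / (t + 1) ** 0.6)
        chain[t] = x
    return chain, accepted / n_iter, scale

if __name__ == "__main__":
    # Example: i.i.d. standard-normal target in 50 dimensions,
    # updating 5 coordinates per iteration.
    log_target = lambda x: -0.5 * np.dot(x, x)
    chain, acc_rate, final_scale = metropolis_within_gibbs(
        log_target, x0=np.zeros(50), block_size=5)
    print(f"acceptance rate ~ {acc_rate:.3f}, tuned scale ~ {final_scale:.3f}")
```

Under the paper's asymptotic regime, tuning such a sampler to roughly the same acceptance rate regardless of `block_size` is what yields comparable efficiency per update, which is the practical point made in the abstract.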

Citation


Peter Neal, Gareth Roberts. "Optimal scaling for partially updating MCMC algorithms." Ann. Appl. Probab. 16(2): 475-515, May 2006. https://doi.org/10.1214/105051605000000791

Information

Published: May 2006
First available in Project Euclid: 29 June 2006

zbMATH: 1127.60021
MathSciNet: MR2244423
Digital Object Identifier: 10.1214/105051605000000791

Subjects:
Primary: 60F05
Secondary: 65C05

Keywords: Langevin algorithm, Markov chain Monte Carlo, Metropolis algorithm, optimal scaling, weak convergence

Rights: Copyright © 2006 Institute of Mathematical Statistics
