The Annals of Applied Probability

Minimising MCMC variance via diffusion limits, with an application to simulated tempering

Gareth O. Roberts and Jeffrey S. Rosenthal


Abstract

We derive new results comparing the asymptotic variance of diffusions by writing them as appropriate limits of discrete-time birth–death chains which themselves satisfy Peskun orderings. We then apply our results to simulated tempering algorithms to establish which choice of inverse temperatures minimises the asymptotic variance of all functionals and thus leads to the most efficient MCMC algorithm.
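
As a point of reference for readers unfamiliar with the algorithm, the sketch below shows a minimal simulated tempering sampler in Python/NumPy for a one-dimensional Gaussian target. The inverse-temperature ladder `betas`, the pseudo-prior constants `log_c`, the proposal step size, and the target itself are illustrative choices only; this is not the paper's construction and it does not implement the optimal temperature spacing derived there.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_f(x):
    """Unnormalised log target: a standard normal (illustrative choice)."""
    return -0.5 * x ** 2

# Inverse-temperature ladder; geometric spacing is a common default, not the
# optimum studied in the paper.
betas = np.array([1.0, 0.5, 0.25, 0.125])

# Pseudo-priors chosen so each rung is visited roughly equally: for
# exp(-beta x^2 / 2) the normalising constant sqrt(2*pi/beta) is known, so
# log c_k = 0.5 * log(beta_k) up to an additive constant that cancels.
log_c = 0.5 * np.log(betas)

def simulated_tempering(n_iter=50_000, step=1.0):
    x, k = 0.0, 0          # state and current rung of the ladder
    samples = []
    for _ in range(n_iter):
        # 1. Random-walk Metropolis move in x at the current inverse temperature.
        y = x + step * rng.standard_normal()
        if np.log(rng.random()) < betas[k] * (log_f(y) - log_f(x)):
            x = y
        # 2. Propose an adjacent rung (symmetric proposal; moves off the
        #    ladder are simply rejected).
        j = k + rng.choice([-1, 1])
        if 0 <= j < len(betas):
            log_alpha = (log_c[j] - log_c[k]) + (betas[j] - betas[k]) * log_f(x)
            if np.log(rng.random()) < log_alpha:
                k = j
        if k == 0:          # record draws only at the target temperature beta = 1
            samples.append(x)
    return np.array(samples)

if __name__ == "__main__":
    draws = simulated_tempering()
    print(f"mean ~ {draws.mean():.3f}, variance ~ {draws.var():.3f}")
```

The reported mean and variance should be close to 0 and 1. How freely the chain moves between hot and cold rungs depends on the spacing of `betas`, which is the quantity whose optimal choice (in the sense of minimising asymptotic variance) the paper characterises.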

Article information

Source
Ann. Appl. Probab., Volume 24, Number 1 (2014), 131–149.

Dates
First available in Project Euclid: 9 January 2014

Permanent link to this document
https://projecteuclid.org/euclid.aoap/1389278722

Digital Object Identifier
doi:10.1214/12-AAP918

Mathematical Reviews number (MathSciNet)
MR3161644

Zentralblatt MATH identifier
1298.60078

Subjects
Primary: 60J22: Computational methods in Markov chains [See also 65C40]
Secondary: 62M05: Markov processes: estimation; 62F10: Point estimation

Keywords
Markov chain Monte Carlo; simulated tempering; optimal scaling; diffusion limits

Citation

Roberts, Gareth O.; Rosenthal, Jeffrey S. Minimising MCMC variance via diffusion limits, with an application to simulated tempering. Ann. Appl. Probab. 24 (2014), no. 1, 131–149. doi:10.1214/12-AAP918. https://projecteuclid.org/euclid.aoap/1389278722


