Bernoulli


On nonlinear Markov chain Monte Carlo

Christophe Andrieu, Ajay Jasra, Arnaud Doucet, and Pierre Del Moral


Abstract

Let $\mathscr{P}(E)$ be the space of probability measures on a measurable space $(E,\mathcal{E})$. In this paper we introduce a class of nonlinear Markov chain Monte Carlo (MCMC) methods for simulating from a probability measure $\pi\in\mathscr{P}(E)$. Nonlinear Markov kernels (see [Feynman–Kac Formulae: Genealogical and Interacting Particle Systems with Applications (2004) Springer]) $K\colon\mathscr{P}(E)\times E\rightarrow\mathscr{P}(E)$ can be constructed to, in some sense, improve over MCMC methods. However, such nonlinear kernels cannot be simulated exactly, so approximations of the nonlinear kernels are constructed using auxiliary or potentially self-interacting chains. Several nonlinear kernels are presented and it is demonstrated that, under some conditions, the associated approximations exhibit a strong law of large numbers; our proof technique is via the Poisson equation and Foster–Lyapunov conditions. We investigate the performance of our approximations with some simulations.
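To make the construction described in the abstract concrete, the sketch below is a minimal, purely illustrative example (not the authors' kernels or theoretical setting): a main random-walk Metropolis chain targeting $\pi$ interacts with an auxiliary chain targeting a tempered version $\pi^{\beta}$ by occasionally proposing a point drawn from the auxiliary chain's empirical occupation measure, i.e. a simulable approximation of a nonlinear kernel of the form $K_{\mu}(x,\cdot)=(1-\varepsilon)P(x,\cdot)+\varepsilon Q_{\mu}(x,\cdot)$. All names and parameter values (`log_pi`, `beta`, `eps`, the step sizes) are assumptions made for this example only.

```python
import numpy as np

# Toy illustration (not the authors' exact construction): a main random-walk
# Metropolis chain targeting pi interacts with an auxiliary chain targeting a
# tempered target pi^beta.  With probability eps the main chain proposes a
# point drawn from the auxiliary chain's past samples (its empirical
# occupation measure), approximating a nonlinear kernel
#   K_mu(x, .) = (1 - eps) P(x, .) + eps Q_mu(x, .).

rng = np.random.default_rng(0)

def log_pi(x):
    """Log-density (up to a constant) of a bimodal 1-d target."""
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

beta = 0.3          # temperature of the auxiliary chain (flatter target pi^beta)
eps = 0.1           # probability of an interaction move
n_iter = 20000
step_main, step_aux = 1.0, 3.0

x_main, x_aux = 0.0, 0.0
history_aux = [x_aux]            # occupation measure of the auxiliary chain
samples = np.empty(n_iter)

for n in range(n_iter):
    # --- auxiliary chain: plain random-walk Metropolis on pi^beta ---
    y = x_aux + step_aux * rng.standard_normal()
    if np.log(rng.uniform()) < beta * (log_pi(y) - log_pi(x_aux)):
        x_aux = y
    history_aux.append(x_aux)

    # --- main chain: mixture of a local move and an interaction move ---
    if rng.uniform() < eps:
        # propose a point from the auxiliary chain's past; the acceptance
        # ratio corrects for the (approximately) tempered proposal law
        y = history_aux[rng.integers(len(history_aux))]
        log_acc = (1.0 - beta) * (log_pi(y) - log_pi(x_main))
    else:
        # local random-walk proposal
        y = x_main + step_main * rng.standard_normal()
        log_acc = log_pi(y) - log_pi(x_main)
    if np.log(rng.uniform()) < log_acc:
        x_main = y
    samples[n] = x_main

print("estimated mean under pi:", samples.mean())
```

The interaction move is accepted with an equi-energy-style ratio that treats the auxiliary history as an approximate sample from $\pi^{\beta}$; letting the chain instead interact with its own past gives the self-interacting variant mentioned in the abstract. The bias introduced by using the empirical occupation measure in place of the limiting measure is precisely what the strong law of large numbers in the paper controls.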

Article information

Source
Bernoulli, Volume 17, Number 3 (2011), 987-1014.

Dates
First available in Project Euclid: 7 July 2011

Permanent link to this document
https://projecteuclid.org/euclid.bj/1310042853

Digital Object Identifier
doi:10.3150/10-BEJ307

Mathematical Reviews number (MathSciNet)
MR2817614

Zentralblatt MATH identifier
1241.60037

Keywords
Foster–Lyapunov condition; interacting Markov chains; nonlinear Markov kernels; Poisson equation

Citation

Andrieu, Christophe; Jasra, Ajay; Doucet, Arnaud; Del Moral, Pierre. On nonlinear Markov chain Monte Carlo. Bernoulli 17 (2011), no. 3, 987–1014. doi:10.3150/10-BEJ307. https://projecteuclid.org/euclid.bj/1310042853



References

  • [1] Aaronson, J., Burton, R., Dehling, H., Gilat, D., Hill, T. and Weiss, B. (1996). Strong laws for L- and U-statistics. Trans. Amer. Math. Soc. 348 2845–2866.
  • [2] Andrieu, C., Doucet, A. and Holenstein, R. (2010). Particle Markov chain Monte Carlo methods (with discussion). J. Roy. Statist. Soc. Ser. B 72 269–342.
  • [3] Andrieu, C., Jasra, A., Doucet, A. and Del Moral, P. (2008). Non-linear Markov chain Monte Carlo. ESAIM Proc. 19 79–84.
  • [4] Andrieu, C., Jasra, A., Doucet, A. and Del Moral, P. (2008). On the convergence of the equi-energy sampler. Stoch. Anal. Appl. 26 298–312.
  • [5] Andrieu, C. and Moulines, É. (2006). On the ergodicity properties of some adaptive MCMC algorithms. Ann. Appl. Probab. 16 1462–1505.
  • [6] Atchadé, Y.F. (2009). Resampling from the past to improve MCMC algorithms. Far East. J. Theor. Stat. 27 81–99.
  • [7] Atchadé, Y.F. (2010). A cautionary tale on the efficiency of some adaptive Monte Carlo Schemes. Ann. Appl. Probab. 20 841–868.
  • [8] Atchadé, Y.F., Fort, G., Moulines, É. and Priouret, P. (2011). Adaptive Markov chain Monte Carlo: Theory and methods. In Inference and Learning in Dynamic Models (D. Barber, S. Chiappa and A.T. Cemgil, eds.). Cambridge: CUP. To appear.
  • [9] Brockwell, A.E., Del Moral, P. and Doucet, A. (2011). Sequentially interacting Markov chain Monte Carlo methods. Ann. Statist. 38 3387–3411.
  • [10] Del Moral, P. (2004). Feynman–Kac Formulae: Genealogical and Interacting Particle Systems with Applications. New York: Springer.
  • [11] Del Moral, P., Doucet, A. and Jasra, A. (2006). Sequential Monte Carlo samplers. J. Roy. Statist. Soc. Ser. B 68 411–436.
  • [12] Del Moral, P. and Miclo, L. (2004). On convergence of chains with occupational self-interactions. Proc. R. Soc. Lond. A. Math. Phys. Eng. Sci. 460 325–346.
  • [13] Doucet, A., De Freitas, J.F.G. and Gordon, N.J. (2001). Sequential Monte Carlo Methods in Practice. New York: Springer.
  • [14] Doukhan, P. (1994). Mixing: Properties and Examples. Lecture Notes in Statistics 85. Berlin: Springer.
  • [15] Fort, G. and Moulines, É. (2003). Polynomial ergodicity of Markov transition kernels. Stochastic Process. Appl. 103 57–99.
  • [16] Geyer, C. (1991). Markov chain maximum likelihood. In Computing Science and Statistics: The 23rd Symposium on the Interface (E. Keramidas, ed.) 156–163. Fairfax, VA: Interface Foundation.
  • [17] Glynn, P.W. and Meyn, S.P. (1996). A Lyapunov bound for solutions of the Poisson equation. Ann. Probab. 24 916–931.
  • [18] Grams, W.F. and Serfling, R.J. (1973). Convergence rates for U-statistics and related statistics. Ann. Statist. 1 153–160.
  • [19] Goldstein, S. (1979). Maximal coupling. Probab. Theory Related Fields 46 193–204.
  • [20] Haario, H., Saksman, E. and Tamminen, J. (2001). An adaptive Metropolis algorithm. Bernoulli 7 223–242.
  • [21] Jarner, S.F. and Hansen, E. (2000). Geometric ergodicity of Metropolis algorithms. Stochastic Process. Appl. 85 341–361.
  • [22] Jasra, A., Stephens, D.A. and Holmes, C.C. (2007). On population-based simulation for static inference. Statist. Comput. 17 263–279.
  • [23] Kou, S.C., Zhou, Q. and Wong, W.H. (2006). Equi-energy sampler with applications to statistical inference and statistical mechanics (with discussion). Ann. Statist. 34 1581–1619.
  • [24] Meyn, S.P. and Tweedie, R.L. (1994). Computable bounds for geometric convergence rates of Markov chains. Ann. Appl. Probab. 4 981–1011.
  • [25] Meyn, S.P. and Tweedie, R.L. (2009). Markov Chains and Stochastic Stability, 2nd ed. Cambridge: CUP.
  • [26] Robert, C.P. and Casella, G. (2004). Monte Carlo Statistical Methods. New York: Springer.
  • [27] Roberts, G.O. and Rosenthal, J.S. (1998). Two convergence properties of hybrid samplers. Ann. Appl. Probab. 8 397–407.
  • [28] Roberts, G.O. and Rosenthal, J.S. (2007). Coupling and ergodicity of adaptive MCMC. J. Appl. Probab. 44 458–475.
  • [29] Roberts, G.O. and Tweedie, R.L. (1996). Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms. Biometrika 83 95–110.
  • [30] Shiryaev, A. (1996). Probability. New York: Springer.