The Annals of Applied Probability

On the stability of some controlled Markov chains and its applications to stochastic approximation with Markovian dynamic

Christophe Andrieu, Vladislav B. Tadić, and Matti Vihola

Full-text: Open access


We develop a practical approach to establish the stability, that is, the recurrence in a given set, of a large class of controlled Markov chains. These processes arise in various areas of applied science and encompass important numerical methods. We show in particular how individual Lyapunov functions and associated drift conditions for the parametrized family of Markov transition probabilities and the parameter update can be combined to form Lyapunov functions for the joint process, leading to the proof of the desired stability property. Of particular interest is the fact that the approach applies even in situations where the two components of the process present a time-scale separation, which is a crucial feature of practical situations. We then move on to show how such a recurrence property can be used in the context of stochastic approximation in order to prove the convergence of the parameter sequence, including in the situation where the so-called stepsize is adaptively tuned. We finally show that the results apply to various algorithms of interest in computational statistics and cognate areas.
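A canonical instance of the setting described above is adaptive MCMC, where a Metropolis chain evolves under a kernel P_θ while θ is updated by a Robbins–Monro recursion driven by the chain. The sketch below is purely illustrative and not the paper's construction: the standard normal target, the stepsize schedule γ_n = n^{-0.6}, and the target acceptance rate 0.44 are all assumptions chosen for the example (an adaptive scaling rule in the spirit of reference works on adaptive Metropolis algorithms).

```python
import math
import random

def adaptive_scaling_metropolis(n_iter=20000, target_accept=0.44, seed=0):
    """Random-walk Metropolis on a standard normal target, with the
    proposal log-scale theta adapted by the stochastic approximation
    update theta_{n+1} = theta_n + gamma_{n+1} (alpha_n - target_accept)."""
    rng = random.Random(seed)
    x, log_scale = 0.0, 0.0        # chain state X_n and parameter theta_n

    def log_pi(z):                 # log-density of N(0, 1), up to a constant
        return -0.5 * z * z

    for n in range(1, n_iter + 1):
        # One step of the Metropolis kernel P_{theta_n}
        y = x + math.exp(log_scale) * rng.gauss(0.0, 1.0)
        alpha = min(1.0, math.exp(log_pi(y) - log_pi(x)))
        if rng.random() < alpha:
            x = y
        # Parameter update driven by the chain; stepsize gamma_n = n^{-0.6}
        log_scale += n ** -0.6 * (alpha - target_accept)

    return x, math.exp(log_scale)
```

The joint process (θ_n, X_n) is exactly a controlled Markov chain of the kind whose stability the paper addresses: the adapted scale typically settles near the value yielding the target acceptance rate, but establishing that the parameter sequence stays in a compact set (and hence converges) is the nontrivial part that the paper's Lyapunov-function approach is designed to handle.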

Article information

Ann. Appl. Probab., Volume 25, Number 1 (2015), 1–45.

First available in Project Euclid: 16 December 2014

Digital Object Identifier: doi:10.1214/13-AAP953

Primary: 65C05: Monte Carlo methods
Secondary: 60J22: Computational methods in Markov chains [See also 65C40]; 60J05: Discrete-time Markov processes on general state spaces

Stability; Markov chains; stochastic approximation; controlled Markov chains; adaptive Markov chain Monte Carlo


Andrieu, Christophe; Tadić, Vladislav B.; Vihola, Matti. On the stability of some controlled Markov chains and its applications to stochastic approximation with Markovian dynamic. Ann. Appl. Probab. 25 (2015), no. 1, 1--45. doi:10.1214/13-AAP953.

