The Annals of Applied Probability

Mixing time estimation in reversible Markov chains from a single sample path

Daniel Hsu, Aryeh Kontorovich, David A. Levin, Yuval Peres, Csaba Szepesvári, and Geoffrey Wolfer



The spectral gap $\gamma_{\star}$ of a finite, ergodic and reversible Markov chain is an important parameter measuring the asymptotic rate of convergence. In applications, the transition matrix $\mathbf{P}$ may be unknown, yet a single sample path of the chain up to a fixed time $n$ may be observed. We consider here the problem of estimating $\gamma_{\star}$ from these data. Let $\boldsymbol{\pi}$ be the stationary distribution of $\mathbf{P}$, and $\pi_{\star}=\min_{x}\pi(x)$. We show that if $n$ is at least $\frac{1}{\gamma_{\star}\pi_{\star}}$ times a logarithmic correction, then $\gamma_{\star}$ can be estimated to within a multiplicative factor with high probability. When $\boldsymbol{\pi}$ is uniform on $d$ states, this nearly matches a lower bound of $\frac{d}{\gamma_{\star}}$ steps required for precise estimation of $\gamma_{\star}$. Moreover, we provide the first procedure for computing a fully data-dependent interval, from a single finite-length trajectory of the chain, that traps the mixing time $t_{\mathrm{mix}}$ of the chain at a prescribed confidence level. The interval does not require knowledge of any parameters of the chain. This stands in contrast to previous approaches, which either only provide point estimates, or require a reset mechanism, or additional prior knowledge. The interval is constructed around the relaxation time $t_{\mathrm{relax}}=1/\gamma_{\star}$, which is strongly related to the mixing time, and its width converges to zero roughly at a $1/\sqrt{n}$ rate, where $n$ is the length of the sample path.
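To illustrate the basic plug-in idea behind estimating $\gamma_{\star}$ from one trajectory (a minimal sketch, not the paper's full procedure or its confidence-interval construction): count transitions along the path to form an empirical transition matrix $\widehat{\mathbf{P}}$, symmetrize it with the empirical stationary distribution (valid because the chain is assumed reversible), and read off one minus the second-largest eigenvalue magnitude. The function name and its arguments here are illustrative, not from the paper.

```python
import numpy as np

def estimate_spectral_gap(path, d):
    """Plug-in estimate of the absolute spectral gap of a reversible chain
    on states {0, ..., d-1}, from a single sample path (list of ints)."""
    # Empirical transition counts N[x, y] = #{t : X_t = x, X_{t+1} = y}.
    N = np.zeros((d, d))
    for x, y in zip(path[:-1], path[1:]):
        N[x, y] += 1
    row = N.sum(axis=1)
    P_hat = N / np.maximum(row, 1)[:, None]          # empirical transition matrix
    pi_hat = row / row.sum()                          # empirical stationary distribution
    # For a reversible chain, D^{1/2} P D^{-1/2} (D = diag(pi)) is symmetric
    # with the same spectrum as P; averaging with the transpose suppresses
    # the asymmetry introduced by sampling noise.
    D = np.sqrt(np.maximum(pi_hat, 1e-12))
    L = (D[:, None] * P_hat) / D[None, :]
    Sym = (L + L.T) / 2
    eigs = np.sort(np.abs(np.linalg.eigvalsh(Sym)))[::-1]
    # eigs[0] is approximately 1; the absolute spectral gap is 1 - max(lambda_2, |lambda_d|).
    return 1.0 - eigs[1]
```

On a two-state chain with transition matrix $\begin{pmatrix}0.8 & 0.2\\ 0.3 & 0.7\end{pmatrix}$ (eigenvalues $1$ and $0.5$, so $\gamma_{\star}=0.5$), a path of a few tens of thousands of steps recovers the gap to within a few percent.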

Article information

Ann. Appl. Probab., Volume 29, Number 4 (2019), 2439-2480.

Received: November 2017
Revised: July 2018
First available in Project Euclid: 23 July 2019


Primary: 60J10 (Markov chains: discrete-time Markov processes on discrete state spaces); 62M05 (Markov processes: estimation); 62M99 (none of the above, but in this section)

Keywords: Markov chains; mixing time; spectral gap; empirical confidence interval


Hsu, Daniel; Kontorovich, Aryeh; Levin, David A.; Peres, Yuval; Szepesvári, Csaba; Wolfer, Geoffrey. Mixing time estimation in reversible Markov chains from a single sample path. Ann. Appl. Probab. 29 (2019), no. 4, 2439--2480. doi:10.1214/18-AAP1457.

References


  • Atchadé, Y. F. (2016). Markov chain Monte Carlo confidence intervals. Bernoulli 22 1808–1838.
  • Audibert, J.-Y., Munos, R. and Szepesvári, C. (2009). Exploration-exploitation tradeoff using variance estimates in multi-armed bandits. Theoret. Comput. Sci. 410 1876–1902.
  • Batu, T., Fortnow, L., Rubinfeld, R., Smith, W. D. and White, P. (2000). Testing that distributions are close. In 41st Annual Symposium on Foundations of Computer Science (Redondo Beach, CA, 2000) 259–269. IEEE Comput. Soc. Press, Los Alamitos, CA.
  • Batu, T., Fortnow, L., Rubinfeld, R., Smith, W. D. and White, P. (2013). Testing closeness of discrete distributions. J. ACM 60 Art. 4, 25.
  • Benítez, J. and Liu, X. (2012). On the continuity of the group inverse. Oper. Matrices 6 859–868.
  • Bernstein, S. (1927). Sur l’extension du théorème limite du calcul des probabilités aux sommes de quantités dépendantes. Math. Ann. 97 1–59.
  • Bhatnagar, N., Bogdanov, A. and Mossel, E. (2011). The computational complexity of estimating MCMC convergence time. In Approximation, Randomization, and Combinatorial Optimization. Lecture Notes in Computer Science 6845 424–435. Springer, Heidelberg.
  • Bhattacharya, B. B. and Valiant, G. (2015). Testing closeness with unequal sized samples. In Advances in Neural Information Processing Systems 28 (C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama and R. Garnett, eds.) 2611–2619. Curran Associates, Red Hook, NY.
  • Bousquet, O., Boucheron, S. and Lugosi, G. (2004). Introduction to statistical learning theory. Lecture Notes in Artificial Intelligence 3176 169–207.
  • Bradley, R. C. (2005). Basic properties of strong mixing conditions. A survey and some open questions. Probab. Surv. 2 107–144. Update of, and a supplement to, the 1986 original.
  • Cho, G. E. and Meyer, C. D. (2001). Comparison of perturbation bounds for the stationary distribution of a Markov chain. Linear Algebra Appl. 335 137–150.
  • Flegal, J. M. and Jones, G. L. (2011). Implementing MCMC: Estimating with confidence. In Handbook of Markov Chain Monte Carlo. Chapman & Hall/CRC Handb. Mod. Stat. Methods 175–197. CRC Press, Boca Raton, FL.
  • Freedman, D. A. (1975). On tail probabilities for martingales. Ann. Probab. 3 100–118.
  • Gamarnik, D. (2003). Extension of the PAC framework to finite and countable Markov chains. IEEE Trans. Inform. Theory 49 338–345.
  • Garren, S. T. and Smith, R. L. (2000). Estimating the second largest eigenvalue of a Markov transition matrix. Bernoulli 6 215–242.
  • Gillman, D. (1998). A Chernoff bound for random walks on expander graphs. SIAM J. Comput. 27 1203–1220.
  • Gyori, B. M. and Paulin, D. (2014). Non-asymptotic confidence intervals for MCMC in practice. Available at arXiv:1212.2016.
  • Haviv, M. and Van der Heyden, L. (1984). Perturbation bounds for the stationary probabilities of a finite Markov chain. Adv. in Appl. Probab. 16 804–818.
  • Hayashi, M. and Watanabe, S. (2016). Information geometry approach to parameter estimation in Markov chains. Ann. Statist. 44 1495–1535.
  • Hsu, D., Kontorovich, A. and Szepesvári, C. (2015). Mixing time estimation in reversible Markov chains from a single sample path. In Advances in Neural Information Processing Systems 28 (C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama and R. Garnett, eds.). Curran Associates, Red Hook, NY.
  • Jones, G. L. and Hobert, J. P. (2001). Honest exploration of intractable probability distributions via Markov chain Monte Carlo. Statist. Sci. 16 312–334.
  • Karandikar, R. L. and Vidyasagar, M. (2002). Rates of uniform convergence of empirical means with mixing processes. Statist. Probab. Lett. 58 297–307.
  • Kipnis, C. and Varadhan, S. R. S. (1986). Central limit theorem for additive functionals of reversible Markov processes and applications to simple exclusions. Comm. Math. Phys. 104 1–19.
  • Kirkland, S. J., Neumann, M. and Shader, B. L. (1998). Applications of Paz’s inequality to perturbation bounds for Markov chains. Linear Algebra Appl. 268 183–196.
  • Kontorovich, A. and Weiss, R. (2014). Uniform Chernoff and Dvoretzky–Kiefer–Wolfowitz-type inequalities for Markov chains and related processes. J. Appl. Probab. 51 1100–1113.
  • Kontoyiannis, I., Lastras-Montaño, L. A. and Meyn, S. P. (2006). Exponential bounds and stopping rules for MCMC and general Markov chains. In VALUETOOLS 45.
  • León, C. A. and Perron, F. (2004). Optimal Hoeffding bounds for discrete reversible Markov chains. Ann. Appl. Probab. 14 958–970.
  • Levin, D. A. and Peres, Y. (2016). Estimating the spectral gap of a reversible Markov chain from a short trajectory. Available at arXiv:1612.05330.
  • Levin, D. A., Peres, Y. and Wilmer, E. L. (2009). Markov Chains and Mixing Times. Amer. Math. Soc., Providence, RI.
  • Li, X. and Wei, Y. (2001). An improvement on the perturbation of the group inverse and oblique projection. Linear Algebra Appl. 338 53–66.
  • Liu, J. S. (2001). Monte Carlo Strategies in Scientific Computing. Springer Series in Statistics. Springer, New York.
  • McDonald, D. J., Shalizi, C. R. and Schervish, M. J. (2011). Estimating beta-mixing coefficients. In International Conference on Artificial Intelligence and Statistics 516–524.
  • Meyer, C. D. Jr. (1975). The role of the group generalized inverse in the theory of finite Markov chains. SIAM Rev. 17 443–464.
  • Meyn, S. P. and Tweedie, R. L. (1993). Markov Chains and Stochastic Stability. Communications and Control Engineering Series. Springer, London.
  • Mohri, M. and Rostamizadeh, A. (2008). Stability bounds for non-iid processes. In Advances in Neural Information Processing Systems 20.
  • Montenegro, R. and Tetali, P. (2006). Mathematical aspects of mixing times in Markov chains. Found. Trends Theor. Comput. Sci. 1 x+121.
  • Paulin, D. (2015). Concentration inequalities for Markov chains by Marton couplings and spectral methods. Electron. J. Probab. 20 no. 79, 32.
  • Seneta, E. (1993). Sensitivity of finite Markov chains under perturbation. Statist. Probab. Lett. 17 163–168.
  • Steinwart, I. and Christmann, A. (2009). Fast learning from non-i.i.d. observations. In Advances in Neural Information Processing Systems 22.
  • Steinwart, I., Hush, D. and Scovel, C. (2009). Learning from dependent observations. J. Multivariate Anal. 100 175–194.
  • Stewart, G. W. and Sun, J. G. (1990). Matrix Perturbation Theory. Computer Science and Scientific Computing. Academic Press, Boston, MA.
  • Sutton, R. S. and Barto, A. G. (1998). Reinforcement Learning: An Introduction (Adaptive Computation and Machine Learning). MIT Press, Cambridge, MA.
  • Tropp, J. A. (2015). An introduction to matrix concentration inequalities. Found. Trends Mach. Learn. 8 1–230.
  • Wolfer, G. and Kontorovich, A. (2019a). Estimating the mixing time of ergodic Markov chains. Available at arXiv:1902.01224.
  • Wolfer, G. and Kontorovich, A. (2019b). Minimax testing of identity to a reference ergodic Markov chain. Available at arXiv:1902.00080.
  • Yu, B. (1994). Rates of convergence for empirical processes of stationary mixing sequences. Ann. Probab. 22 94–116.