Brazilian Journal of Probability and Statistics

Boosting, downsizing and optimality of test functions of Markov chains

Thomas R. Boucher


Abstract

Test functions play an important role in Markov chain theory. Stability of a Markov chain can be demonstrated by constructing a test function of the chain that satisfies a stochastic drift criterion. The test function defines a class of functions of the process for which limit laws hold, yields bounds on the convergence of the Markov chain transition probabilities to the stationary distribution, and provides information concerning the mixing properties of the chain. Under certain conditions, these results can be improved by using a new test function derived from a known test function of a Markov chain.
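
For context, a minimal sketch of a standard geometric (Foster–Lyapunov) drift criterion of the kind the abstract refers to; the test function $V \ge 1$, the constants $0 < \lambda < 1$ and $b < \infty$, and the set $C$ are assumed notation for illustration and are not taken from the article.

% Sketch under assumed notation: geometric (Foster--Lyapunov) drift condition
% for a Markov chain (X_n) on a state space \mathcal{X}; V, lambda, b, C are placeholders.
\[
\mathbb{E}\bigl[V(X_{n+1}) \mid X_n = x\bigr] \;\le\; \lambda\, V(x) + b\,\mathbf{1}_{C}(x), \qquad x \in \mathcal{X}.
\]

One standard way a new test function can be obtained from a known one, loosely in the spirit of the title's "downsizing" (an assumption on our part, not a claim about the article's exact construction), is to take a power $V^{s}$ with $0 < s \le 1$: by Jensen's inequality and subadditivity of $t \mapsto t^{s}$,
\[
\mathbb{E}\bigl[V^{s}(X_{n+1}) \mid X_n = x\bigr] \;\le\; \bigl(\lambda V(x) + b\,\mathbf{1}_{C}(x)\bigr)^{s} \;\le\; \lambda^{s} V^{s}(x) + b^{s}\,\mathbf{1}_{C}(x),
\]
so $V^{s}$ again satisfies a geometric drift condition, with the weaker constant $\lambda^{s} > \lambda$ (for $s < 1$) and with the smaller class of functions dominated by $V^{s}$.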

Article information

Source
Braz. J. Probab. Stat., Volume 31, Number 3 (2017), 640-652.

Dates
Received: September 2015
Accepted: June 2016
First available in Project Euclid: 22 August 2017

Permanent link to this document
https://projecteuclid.org/euclid.bjps/1503388832

Digital Object Identifier
doi:10.1214/16-BJPS327

Mathematical Reviews number (MathSciNet)
MR3693984

Zentralblatt MATH identifier
1378.60097

Keywords
Markov chain; convergence; ergodicity; mixing; test function

Citation

Boucher, Thomas R. Boosting, downsizing and optimality of test functions of Markov chains. Braz. J. Probab. Stat. 31 (2017), no. 3, 640--652. doi:10.1214/16-BJPS327. https://projecteuclid.org/euclid.bjps/1503388832


