Bayesian statistical methodology has become widely used across a broad range of applications over the past several decades. Bayesian inference often requires drawing samples from intractable probability distributions, and Markov chain Monte Carlo (MCMC) algorithms are common methods of obtaining such samples. When an MCMC algorithm is used, it is important to determine how many iterations the chain must run before it is “close enough” to its target distribution to allow approximate sampling from that distribution. Several methods of addressing this question exist in the literature: some rely on the output of the chain, and others are based on Markov chain theory, but these techniques suffer from major practical limitations. This work provides a computational method of bounding the mixing time of a Metropolis–Hastings algorithm. The approach extends the work of Spade (Statistics and Computing 26 (2016) 761–781) and Spade (Markov Processes and Related Fields 26 (2020) 487–516) to general versions of the Metropolis–Hastings algorithm, while examining the convergence behavior of such samplers under symmetric and asymmetric proposal densities.
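For background, the Metropolis–Hastings algorithm discussed above can be sketched as follows. This is a generic illustration, not the paper's method: a random-walk Metropolis sampler with a symmetric Gaussian proposal (so the Hastings correction cancels), targeting a standard normal density. The function name, step size, and target are illustrative choices.

```python
import math
import random

def metropolis_hastings(log_target, x0, n_iter, step=1.0, seed=0):
    """Random-walk Metropolis sampler with a symmetric Gaussian proposal.

    Because the proposal density q(y | x) is symmetric in (x, y), the
    acceptance ratio reduces to target(y) / target(x).
    """
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_iter):
        y = x + rng.gauss(0.0, step)            # propose a candidate move
        log_alpha = log_target(y) - log_target(x)
        if math.log(rng.random()) < log_alpha:  # accept with prob min(1, alpha)
            x = y
        samples.append(x)
    return samples

# Target: standard normal; an unnormalized log-density suffices.
log_std_normal = lambda x: -0.5 * x * x

# Start far from the mode to illustrate the burn-in / mixing question:
# early iterates drift toward the target before the chain equilibrates.
draws = metropolis_hastings(log_std_normal, x0=5.0, n_iter=20000)
```

The question the paper addresses is precisely how large `n_iter` must be before the chain's distribution is close to the target; diagnostics based only on `draws` are the output-based methods the abstract mentions.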
"A Monte Carlo integration approach to estimating drift and minorization coefficients for Metropolis–Hastings samplers." Braz. J. Probab. Stat. 35 (3) 466 - 483, August 2021. https://doi.org/10.1214/20-BJPS486