Open Access
Markov Chains for Exploring Posterior Distributions
Luke Tierney
Ann. Statist. 22(4): 1701-1728 (December, 1994). DOI: 10.1214/aos/1176325750


Several Markov chain methods are available for sampling from a posterior distribution. Two important examples are the Gibbs sampler and the Metropolis algorithm. In addition, several strategies are available for constructing hybrid algorithms. This paper outlines some of the basic methods and strategies and discusses some related theoretical and practical issues. On the theoretical side, results from the theory of general state space Markov chains can be used to obtain convergence rates, laws of large numbers and central limit theorems for estimates obtained from Markov chain methods. These theoretical results can be used to guide the construction of more efficient algorithms. For the practical use of Markov chain methods, standard simulation methodology provides several variance reduction techniques and also gives guidance on the choice of sample size and allocation.
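As a minimal illustration of one of the methods named in the abstract (not code from the paper itself), a random-walk Metropolis sampler for a univariate target might be sketched as follows; the target density, proposal scale, and chain length here are illustrative choices:

```python
import math
import random

def metropolis(log_target, x0, n_steps, scale=1.0):
    """Random-walk Metropolis sampler for a univariate target.

    log_target: log of the (possibly unnormalized) target density
    x0: starting state
    n_steps: number of transitions to simulate
    scale: standard deviation of the Gaussian random-walk proposal
    """
    chain = [x0]
    x = x0
    for _ in range(n_steps):
        # Symmetric proposal, so the Hastings ratio reduces to a density ratio.
        y = x + random.gauss(0.0, scale)
        log_alpha = log_target(y) - log_target(x)
        # Accept the move with probability min(1, target(y)/target(x)).
        if math.log(random.random()) < log_alpha:
            x = y
        chain.append(x)
    return chain

# Example: sample from a standard normal "posterior" (log density up to a constant).
random.seed(1)
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_steps=5000)
mean = sum(draws) / len(draws)
```

Under the conditions discussed in the paper, averages such as `mean` above obey a law of large numbers and, with further conditions, a central limit theorem, which is what justifies using chain averages as posterior estimates.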




Published: December, 1994
First available in Project Euclid: 11 April 2007

zbMATH: 0829.62080
MathSciNet: MR1329166
Digital Object Identifier: 10.1214/aos/1176325750

Primary: 60J05
Secondary: 65C05

Keywords: 62-04 , Gibbs sampler , Metropolis-Hastings algorithm , Monte Carlo , variance reduction

Rights: Copyright © 1994 Institute of Mathematical Statistics
