The Annals of Statistics

Markov Chains for Exploring Posterior Distributions

Luke Tierney


Abstract

Several Markov chain methods are available for sampling from a posterior distribution. Two important examples are the Gibbs sampler and the Metropolis algorithm. In addition, several strategies are available for constructing hybrid algorithms. This paper outlines some of the basic methods and strategies and discusses some related theoretical and practical issues. On the theoretical side, results from the theory of general state space Markov chains can be used to obtain convergence rates, laws of large numbers and central limit theorems for estimates obtained from Markov chain methods. These theoretical results can be used to guide the construction of more efficient algorithms. For the practical use of Markov chain methods, standard simulation methodology provides several variance reduction techniques and also gives guidance on the choice of sample size and allocation.
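The Metropolis algorithm mentioned in the abstract can be illustrated with a minimal sketch. The following is not code from the paper, only a generic random-walk Metropolis sampler for a one-dimensional target, with a standard normal density standing in for a posterior; the function names and tuning values are illustrative assumptions.

```python
import math
import random

def metropolis(log_post, x0, n_samples, scale=1.0, seed=0):
    """Random-walk Metropolis sampler for a one-dimensional target.

    log_post: log of the target density, known only up to an additive constant.
    scale: standard deviation of the Gaussian proposal (a tuning parameter).
    """
    rng = random.Random(seed)
    x = x0
    lp = log_post(x)
    samples = []
    for _ in range(n_samples):
        # Propose a symmetric random-walk move.
        prop = x + rng.gauss(0.0, scale)
        lp_prop = log_post(prop)
        # Accept with probability min(1, pi(prop)/pi(x)); compare on the log scale.
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Example: sample a standard normal target and check the first two moments.
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000, scale=2.0)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

Because the chain's draws are correlated, the effective sample size is smaller than the nominal one; the variance reduction and sample size considerations the abstract refers to address exactly this point.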

Article information

Source
Ann. Statist. Volume 22, Number 4 (1994), 1701-1728.

Dates
First available: 11 April 2007

Permanent link to this document
http://projecteuclid.org/euclid.aos/1176325750


Digital Object Identifier
doi:10.1214/aos/1176325750

Mathematical Reviews number (MathSciNet)
MR1329166

Zentralblatt MATH identifier
0829.62080

Subjects
Primary: 60J05: Discrete-time Markov processes on general state spaces
Secondary: 65C05: Monte Carlo methods

Keywords
62-04; Monte Carlo; Metropolis-Hastings algorithm; Gibbs sampler; variance reduction

Citation

Tierney, Luke. Markov Chains for Exploring Posterior Distributions. The Annals of Statistics 22 (1994), no. 4, 1701--1728. doi:10.1214/aos/1176325750. http://projecteuclid.org/euclid.aos/1176325750.

