Electronic Journal of Statistics

Monte Carlo methods for improper target distributions

Krishna B. Athreya and Vivekananda Roy


Monte Carlo methods (based on iid sampling or Markov chains) for estimating integrals with respect to a proper target distribution (that is, a probability distribution) are well known in the statistics literature. If the target distribution $\pi$ happens to be improper, then it is shown here that the standard time-average estimator based on a Markov chain with $\pi$ as its stationary distribution converges to zero with probability 1, and hence is not appropriate. In this paper, we present some limit theorems for regenerative sequences and use these to develop algorithms that produce strongly consistent estimators (called regeneration and ratio estimators) that work whether $\pi$ is proper or improper. These methods may be referred to as regenerative sequence Monte Carlo (RSMC) methods. The regenerative sequences include Markov chains as a special case. We also present an algorithm that uses the domination of the given target $\pi$ by a probability distribution $\pi_{0}$. Examples are given to illustrate the use and limitations of our algorithms.
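The ratio estimator described in the abstract can be illustrated on a toy example (this sketch is not from the paper itself, and the functions $f$, $g$ are my own choices): the simple symmetric random walk on $\mathbb{Z}$ is a null recurrent Markov chain whose stationary measure is counting measure, an improper target $\pi$. Taking regenerations to be visits to 0 and summing $f$ and $g$ up to the last regeneration time, the ratio of the two sums converges almost surely to $\pi(f)/\pi(g)$ even though each time average alone converges to zero.

```python
import random

def ratio_estimator(n_steps, seed=42):
    """Ratio estimator for the improper target pi = counting measure on Z,
    using the simple symmetric random walk (null recurrent) as the
    regenerative sequence; regenerations occur at visits to 0.

    Estimates pi(f)/pi(g) with f = indicator{|x| <= 2} and
    g = indicator{x = 0}.  Since pi gives each integer mass 1,
    the true ratio is pi(f)/pi(g) = 5/1 = 5.
    """
    rng = random.Random(seed)
    x = 0
    sum_f = sum_g = 0
    # Sums are used only up to the most recent regeneration time,
    # so the estimator is a ratio over completed regeneration cycles.
    regen_f = regen_g = 0
    for _ in range(n_steps):
        if abs(x) <= 2:
            sum_f += 1
        if x == 0:              # regeneration: the walk restarts afresh
            sum_g += 1
            regen_f, regen_g = sum_f, sum_g
        x += rng.choice((-1, 1))
    return regen_f / regen_g    # -> pi(f)/pi(g) = 5 almost surely

if __name__ == "__main__":
    print(ratio_estimator(10**6))
```

Note the slow convergence typical of null recurrent chains: in $n$ steps the walk regenerates only on the order of $\sqrt{n}$ times, so the effective sample size here is roughly a thousand cycles, not a million draws.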

Article information

Electron. J. Statist., Volume 8, Number 2 (2014), 2664-2692.

First available in Project Euclid: 22 December 2014


Primary: 65C05 (Monte Carlo methods); 62M99 (none of the above, but in this section)
Secondary: 60G50 (sums of independent random variables; random walks)

Keywords: importance sampling; improper posterior; Markov chains; MCMC; null recurrence; ratio limit theorem; regenerative sequence


Athreya, Krishna B.; Roy, Vivekananda. Monte Carlo methods for improper target distributions. Electron. J. Statist. 8 (2014), no. 2, 2664--2692. doi:10.1214/14-EJS969. https://projecteuclid.org/euclid.ejs/1419258190


  • [1] Athreya, K. B. (1986). Darling and Kac revisited. Sankhyā 48 255–266.
  • [2] Athreya, K. B. and Lahiri, S. N. (2006). Measure Theory and Probability Theory. Springer, New York.
  • [3] Athreya, K. B. and Ney, P. (1978). A new approach to the limit theory of recurrent Markov chains. Transactions of the American Mathematical Society 245 493–501.
  • [4] Breiman, L. (1992). Probability. SIAM, Philadelphia.
  • [5] Casella, G. and George, E. (1992). Explaining the Gibbs sampler. The American Statistician 46 167–174.
  • [6] Chung, K. L. (1968). A Course in Probability Theory. Harcourt, Brace & World, New York.
  • [7] Durrett, R. (2010). Probability: Theory and Examples, 4th ed. Cambridge University Press, New York.
  • [8] Feller, W. (1971). An Introduction to Probability Theory and Its Applications, vol. II. John Wiley & Sons, New York.
  • [9] Gärtner, J. and Sun, R. (2009). A quenched limit theorem for the local time of random walks on $\mathbb{Z}^2$. Stochastic Processes and Their Applications 119 1198–1215.
  • [10] Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57 97–109.
  • [11] Hobert, J. P. (2001). Stability relationships among the Gibbs sampler and its subchains. Journal of Computational and Graphical Statistics 10 185–205.
  • [12] Hobert, J. P. and Casella, G. (1996). The effect of improper priors on Gibbs sampling in hierarchical linear mixed models. Journal of the American Statistical Association 91 1461–1473.
  • [13] Hobert, J. P. and Casella, G. (1998). Functional compatibility, Markov chains, and Gibbs sampling with improper posteriors. Journal of Computational and Graphical Statistics 7 42–60.
  • [14] Karlsen, H. A. and Tjøstheim, D. (2001). Nonparametric estimation in null recurrent time series. The Annals of Statistics 29 372–416.
  • [15] Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. and Teller, E. (1953). Equation of state calculations by fast computing machines. Journal of Chemical Physics 21 1087–1092.
  • [16] Meyn, S. P. and Tweedie, R. L. (1993). Markov Chains and Stochastic Stability. Springer-Verlag, London.
  • [17] Mykland, P., Tierney, L. and Yu, B. (1995). Regeneration in Markov chain samplers. Journal of the American Statistical Association 90 233–241.
  • [18] Nummelin, E. (1978). A splitting technique for Harris recurrent Markov chains. Z. Wahrsch. Verw. Gebiete 43 309–318.
  • [19] Peköz, E. and Röllin, A. (2011). Exponential approximation for the nearly critical Galton–Watson process and occupation times of Markov chains. Electronic Journal of Probability 16 1381–1393.
  • [20] R Development Core Team (2011). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, URL http://www.R-project.org.
  • [21] Robert, C. and Casella, G. (2004). Monte Carlo Statistical Methods, 2nd ed. Springer, New York.
  • [22] Roberts, G. O., Sahu, S. K. and Gilks, W. R. (1995). Comment on "Bayesian computation and stochastic systems" by J. Besag, P. Green, D. Higdon and K. Mengersen. Statistical Science 10 49–51.
  • [23] Vineyard, G. H. (1963). The number of distinct sites visited in a random walk on a lattice. Journal of Mathematical Physics 4 1191–1193.