The advance publication content published here for Statistical Science is in its final form; it has been reviewed, corrected, edited, typeset, and assigned a permanent digital object identifier (DOI). The article's pagination will be updated when the article is assigned to a volume and issue.
Advance publication content can be cited using the date of online publication and the DOI.
This paper takes the reader on a journey through the history of Bayesian computation, from the 18th century to the present day. Beginning with the one-dimensional integral first confronted by Bayes in 1763, we highlight the key contributions of: Laplace, Metropolis (and, importantly, his coauthors), Hammersley and Handscomb, and Hastings, all of which set the foundations for the computational revolution in the late 20th century—led, primarily, by Markov chain Monte Carlo (MCMC) algorithms. A very short outline of 21st century computational methods—including pseudo-marginal MCMC, Hamiltonian Monte Carlo, sequential Monte Carlo and the various “approximate” methods—completes the paper.
Given a sequence X = (X_1, X_2, …) of random observations, a Bayesian forecaster aims to predict X_{n+1} based on (X_1, …, X_n) for each n ≥ 0. To this end, in principle, she only needs to select a collection σ = (σ_0, σ_1, …), called "strategy" in what follows, where σ_0(·) = P(X_1 ∈ ·) is the marginal distribution of X_1 and σ_n(·) = P(X_{n+1} ∈ · | X_1, …, X_n) the nth predictive distribution. Because of the Ionescu–Tulcea theorem, σ can be assigned directly, without passing through the usual prior/posterior scheme. One main advantage is that no prior probability is to be selected. In a nutshell, this is the predictive approach to Bayesian learning. A concise review of the latter is provided in this paper. We try to put such an approach in the right framework, to make clear a few misunderstandings, and to provide a unifying view. Some recent results are discussed as well. In addition, some new strategies are introduced and the corresponding distribution of the data sequence X is determined. The strategies concern generalized Pólya urns, random change points, covariates and stationary sequences.
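The predictive construction above can be illustrated with the simplest example of a strategy, a two-color Pólya urn: the (n+1)th predictive probability of observing color 1 is (a + number of 1s so far) / (a + b + n). The sketch below is illustrative only and is not taken from the paper; the initial weights a and b and the function names are assumptions.

```python
# Illustrative sketch (not from the paper): a two-color Polya urn as a
# predictive strategy sigma. The predictive probability of seeing 1 next,
# given the history (x_1, ..., x_n), is (a + #ones) / (a + b + n).
import random

def polya_predictive(history, a=1.0, b=1.0):
    """nth predictive probability that the next observation equals 1."""
    n = len(history)
    return (a + sum(history)) / (a + b + n)

def simulate(n_steps, a=1.0, b=1.0, seed=0):
    """Generate X_1, ..., X_n by sampling each X_{n+1} from its predictive.

    By the Ionescu-Tulcea theorem, these predictives directly determine
    the joint distribution of the sequence -- no prior is ever specified.
    """
    rng = random.Random(seed)
    history = []
    for _ in range(n_steps):
        p = polya_predictive(history, a, b)
        history.append(1 if rng.random() < p else 0)
    return history
```

For example, with a = b = 1 the predictive probability starts at 1/2 and, after observing two 1s, rises to (1 + 2)/(2 + 2) = 3/4, exhibiting the reinforcement characteristic of exchangeable urn schemes.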
The 21st century has seen an enormous growth in the development and use of approximate Bayesian methods. Such methods produce computational solutions to certain “intractable” statistical problems that challenge exact methods like Markov chain Monte Carlo: for instance, models with unavailable likelihoods, high-dimensional models and models featuring large data sets. These approximate methods are the subject of this review. The aim is to help new researchers in particular—and more generally those interested in adopting a Bayesian approach to empirical work—distinguish between different approximate techniques, understand the sense in which they are approximate, appreciate when and why particular methods are useful and see the ways in which they can be combined.
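The "unavailable likelihood" setting mentioned above is the classic motivation for approximate Bayesian computation (ABC) rejection sampling: draw a parameter from the prior, simulate data under it, and keep the draw only when a summary of the simulated data lands close to the observed summary. The following sketch is a generic illustration, not code from the review; the prior range, tolerance, and summary statistic are all assumptions.

```python
# Illustrative ABC rejection sketch (not from the review). The model is a
# Normal(theta, 1) with a Uniform(-5, 5) prior on theta; the summary
# statistic is the sample mean, and tol is the acceptance tolerance.
import random

def abc_rejection(observed_mean, n_obs, n_draws=10000, tol=0.1, seed=0):
    """Return accepted prior draws approximating the posterior of theta."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-5.0, 5.0)                    # prior draw
        sim = [rng.gauss(theta, 1.0) for _ in range(n_obs)]  # simulate data
        if abs(sum(sim) / n_obs - observed_mean) < tol:   # compare summaries
            accepted.append(theta)
    return accepted
```

The accepted draws form an approximate posterior sample: the approximation error comes both from the tolerance tol being positive and from compressing the data to a summary statistic, which is precisely the sense of "approximate" the review examines.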
Stephen M. Stigler received his Ph.D. in Statistics from the University of California, Berkeley, with a dissertation on the asymptotic distribution of linear functions of order statistics. Starting in 1967, he taught at the University of Wisconsin, Madison, then in 1979 moved to the University of Chicago, where he taught until 2021. Stigler has worked on a variety of topics in mathematical statistics, ranging from asymptotic theory to the theory of experimental design, and on applications of statistics including in anthropology, forensic science, paleontology, psychology, information transfer and sports. In recent years, he has concentrated on the history of statistics, with inquiries ranging from the development of statistical methods in astronomy and geodesy and their spread to the biological and social sciences, to lotteries, to the modern development of statistical theory. He has published four books, The History of Statistics (1986), Statistics on the Table (1999), The Seven Pillars of Statistical Wisdom (2016) and Casanova’s Lottery (2022). A recent research focus has been upon the way the work of Francis Galton on the statistics of inheritance led to the creation of modern multivariate analysis and made a true Bayesian inference possible, and on how R. A. Fisher’s transformation of Karl Pearson’s path-breaking research led to a modern period of statistical enlightenment.
Stigler is an elected member of the American Academy of Arts and Sciences and of the American Philosophical Society; he has served as President of the Institute of Mathematical Statistics and of the International Statistical Institute. In 2005, he received the Humboldt Foundation Research Award; in 2010, he was elected Membre Associé of the Académie royale de Belgique, Classe des Sciences. Stigler served as Theory and Methods Editor for the Journal of the American Statistical Association 1979–1982. He was a Guggenheim Fellow in 1977, and received awards for undergraduate teaching at the University of Wisconsin (1971) and University of Chicago (1998).
This interview with Stigler was conducted remotely in July 2021.