- Statist. Sci.
- Volume 18, Issue 1 (2003), 1-32.
Could Fisher, Jeffreys and Neyman Have Agreed on Testing?
Ronald Fisher advocated testing using p-values, Harold Jeffreys proposed use of objective posterior probabilities of hypotheses and Jerzy Neyman recommended testing with fixed error probabilities. Each was quite critical of the other approaches. Most troubling for statistics and science is that the three approaches can lead to quite different practical conclusions.
This article focuses on the conditional frequentist approach to testing, which is argued to provide the basis for a methodological unification of the approaches of Fisher, Jeffreys and Neyman. The idea is to follow Fisher in using p-values to define the "strength of evidence" in data and to follow his approach of conditioning on strength of evidence; then follow Neyman by computing Type I and Type II error probabilities, but do so conditional on the strength of evidence in the data. The resulting conditional frequentist error probabilities equal the objective posterior probabilities of the hypotheses advocated by Jeffreys.
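The key identity behind this unification can be sketched as follows (the notation here is illustrative, not necessarily the paper's own): for a test of simple hypotheses H0 versus H1 with Bayes factor B(x) of H0 to H1 and equal prior probabilities, the conditional frequentist test reports error probabilities that coincide with the objective posterior probabilities:

```latex
% Illustrative sketch: conditional frequentist error probabilities
% for H0 vs H1, with B(x) the Bayes factor of H0 to H1 and equal
% prior probabilities on the two hypotheses.
\[
  \alpha(x) \;=\; \Pr\bigl(\text{Type I error} \mid S = s(x)\bigr)
            \;=\; \frac{B(x)}{1 + B(x)}
            \;=\; \Pr(H_0 \mid x),
\]
\[
  \beta(x)  \;=\; \Pr\bigl(\text{Type II error} \mid S = s(x)\bigr)
            \;=\; \frac{1}{1 + B(x)}
            \;=\; \Pr(H_1 \mid x),
\]
```

where S = s(x) is the conditioning statistic measuring the strength of evidence. Small B(x) (evidence against H0) thus yields a small reported conditional Type I error when rejecting, and the reported error has a direct Bayesian reading.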
First available in Project Euclid: 23 June 2003
Berger, James O. Could Fisher, Jeffreys and Neyman Have Agreed on Testing? Statist. Sci. 18 (2003), no. 1, 1-32. doi:10.1214/ss/1056397485. https://projecteuclid.org/euclid.ss/1056397485
- Includes: Ronald Christensen. Comment.
- Includes: Wesley O. Johnson. Comment.
- Includes: Michael Lavine. Comment.
- Includes: Subhash R. Lele. Comment.
- Includes: Deborah G. Mayo. Comment.
- Includes: Luis R. Pericchi. Comment.
- Includes: N. Reid. Comment.
- Includes: James O. Berger. Rejoinder.