Electronic Journal of Statistics

Almost sure hypothesis testing and a resolution of the Jeffreys-Lindley paradox

Michael Naaman

Full-text: Open access

Abstract

A new method of hypothesis testing is proposed that ensures the probability of a type I error becomes arbitrarily small as the sample size grows, by allowing the significance level to decrease with the number of observations in the study. Furthermore, under mild regularity conditions, covering both i.i.d. and strongly mixing processes, the corresponding sequence of hypothesis tests makes only a finite number of errors with probability one. The method can be used as an alternative to arbitrary fixed significance levels such as $0.05$ or $0.01$.
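
As a rough illustration of the idea only (not the paper's exact construction), the sketch below tests a mean with a critical value that grows slowly with $n$. The rate $\sqrt{(2+\delta)\log n}$, the constant `delta`, and the function name `almost_sure_t_test` are illustrative assumptions, chosen so that the implied per-$n$ type I error probabilities are summable and, by Borel-Cantelli, only finitely many false rejections occur along the sequence.

```python
# Illustrative sketch (assumptions noted above): a two-sided test of the mean
# whose significance level shrinks with the sample size n.  With a critical
# value of order sqrt((2 + delta) * log(n)), the Gaussian tail bound makes the
# per-n type I error O(n^{-(1 + delta/2)}), which is summable in n.
import numpy as np
from scipy import stats

def almost_sure_t_test(x, mu0=0.0, delta=0.5):
    """Test H0: mean == mu0 with an n-dependent critical value."""
    n = len(x)
    t_stat = np.sqrt(n) * (x.mean() - mu0) / x.std(ddof=1)
    c_n = np.sqrt((2.0 + delta) * np.log(n))   # critical value growing with n
    alpha_n = 2.0 * stats.norm.sf(c_n)         # implied significance level
    return abs(t_stat) > c_n, t_stat, c_n, alpha_n

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=1_000)  # H0 true: mean is 0
reject, t_stat, c_n, alpha_n = almost_sure_t_test(x, mu0=0.0)
print(f"t = {t_stat:.2f}, critical value = {c_n:.2f}, "
      f"implied alpha_n = {alpha_n:.2e}, reject H0: {reject}")
```

Because the critical value grows only like $\sqrt{\log n}$ while the $t$-statistic under a fixed alternative grows like $\sqrt{n}$, power still tends to one.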

This approach resolves the Jeffreys-Lindley paradox. It is also robust to multiple comparisons and, to a lesser extent, to optional stopping, making it somewhat robust to data fishing or p-hacking.

As an example of the practical applications of this technique, it is used as a lag-order selection mechanism in simulations, where it performs well relative to other model selection criteria. In another simulation, hypothesis tests and confidence intervals for the mean are investigated, demonstrating improved performance even in small samples. We also show that, under mild regularity conditions, any sequence of two-sided hypothesis tests with a fixed significance level will make an infinite number of mistakes with positive probability.
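
For concreteness, a hypothetical general-to-specific lag-selection loop in this spirit might look as follows. The AR(2) data-generating process, `max_lag`, the OLS standard errors, and the $\sqrt{(2+\delta)\log n}$ threshold are assumptions for illustration and not the simulation design reported in the paper.

```python
# Illustrative sketch: select the AR lag order by testing the highest-order lag
# coefficient with the same n-dependent critical value as in the sketch above,
# dropping one lag at a time until a coefficient is significant.
import numpy as np

def select_lag(y, max_lag=8, delta=0.5):
    for p in range(max_lag, 0, -1):
        # OLS regression of y_t on (1, y_{t-1}, ..., y_{t-p}).
        Y = y[p:]
        X = np.column_stack([np.ones(len(Y))] +
                            [y[p - j:-j] for j in range(1, p + 1)])
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        sigma2 = resid @ resid / (len(Y) - X.shape[1])
        cov = sigma2 * np.linalg.inv(X.T @ X)
        t_last = beta[-1] / np.sqrt(cov[-1, -1])        # t-stat on the lag-p term
        c_n = np.sqrt((2.0 + delta) * np.log(len(Y)))   # shrinking-alpha threshold
        if abs(t_last) > c_n:                           # lag p is significant: stop
            return p
    return 0

rng = np.random.default_rng(1)
# Simulate an AR(2) process: y_t = 0.5 y_{t-1} - 0.3 y_{t-2} + e_t.
e = rng.normal(size=2_000)
y = np.zeros(2_000)
for t in range(2, 2_000):
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 2] + e[t]
print("selected lag order:", select_lag(y))
```

In this setup the true order (2) is typically recovered, since the spurious higher-order lags rarely exceed the growing threshold while the genuine lag-2 coefficient does.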

Article information

Source
Electron. J. Statist., Volume 10, Number 1 (2016), 1526-1550.

Dates
Received: August 2015
First available in Project Euclid: 31 May 2016

Permanent link to this document
https://projecteuclid.org/euclid.ejs/1464710240

Digital Object Identifier
doi:10.1214/16-EJS1146

Mathematical Reviews number (MathSciNet)
MR3507372

Zentralblatt MATH identifier
06600846

Subjects
Primary: 60K35: Interacting random processes; statistical mechanics type models; percolation theory [See also 82B43, 82C43]
Secondary: 60K35: Interacting random processes; statistical mechanics type models; percolation theory [See also 82B43, 82C43]

Keywords
Hypothesis testing; Edgeworth expansions; model selection; p-hacking

Citation

Naaman, Michael. Almost sure hypothesis testing and a resolution of the Jeffreys-Lindley paradox. Electron. J. Statist. 10 (2016), no. 1, 1526--1550. doi:10.1214/16-EJS1146. https://projecteuclid.org/euclid.ejs/1464710240

