Abstract
A new method of hypothesis testing is proposed that ensures the probability of a type I error becomes arbitrarily small as the sample size grows, by allowing the significance level to decrease with the number of observations in the study. Furthermore, under mild regularity conditions covering both i.i.d. and strongly mixing processes, the corresponding sequence of hypothesis tests makes only a finite number of errors with probability one. The method can be used as an alternative to arbitrary fixed significance levels such as $0.05$ or $0.01$.
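A minimal sketch of the idea is given below, assuming a hypothetical decay rate for the significance level; the abstract does not specify the rate used in the paper, so the function `shrinking_alpha` and its parameters are illustrative only.

```python
import numpy as np
from scipy import stats

def shrinking_alpha(n, c=1.0, delta=0.5):
    # Hypothetical significance level that shrinks with the sample size n.
    # The decay rate c * n**(-delta) is an assumption for illustration,
    # not the rate derived in the paper.
    return min(0.05, c * n ** (-delta))

def test_mean(x, mu0=0.0):
    # Two-sided t-test of H0: mean == mu0 at a sample-size-dependent level.
    n = len(x)
    t_stat, p_value = stats.ttest_1samp(x, mu0)
    alpha_n = shrinking_alpha(n)
    return p_value < alpha_n, alpha_n

# Under H0, larger samples face a stricter level, so false rejections
# become rarer as n grows.
rng = np.random.default_rng(0)
for n in (50, 500, 5000):
    reject, alpha_n = test_mean(rng.normal(size=n))
    print(n, round(alpha_n, 4), reject)
```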
This approach resolves the Jeffreys-Lindley paradox. It is also robust to multiple comparisons and, to a lesser extent, to optional stopping, which makes it somewhat robust to data fishing or p-hacking.
As an example of the practical applications of this technique, it is used as a lag order selection mechanism in simulations, where it performs well relative to other model selection criteria. In another simulation, hypothesis tests and confidence intervals for the mean are investigated, demonstrating improved performance even in small samples. We also show that, under mild regularity conditions, any sequence of two-sided hypothesis tests with a fixed significance level will make an infinite number of mistakes with positive probability. The sketch below illustrates this contrast.
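The following sketch, again using an assumed decay rate rather than the paper's, counts false rejections of a true null about the mean along a growing sequence of samples. The fixed $0.05$ level keeps accumulating type I errors as testing continues, while the shrinking level accumulates far fewer.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def alpha_n(n, c=1.0, delta=0.5):
    # Illustrative shrinking significance level (assumed rate, not the paper's).
    return min(0.05, c * n ** (-delta))

errors_fixed = 0
errors_shrinking = 0
x = np.empty(0)
for _ in range(2000):
    x = np.append(x, rng.normal(size=10))   # grow the sample; true mean is 0
    p = stats.ttest_1samp(x, 0.0).pvalue
    errors_fixed += p < 0.05                # fixed-level test
    errors_shrinking += p < alpha_n(len(x)) # shrinking-level test

print("false rejections, fixed 0.05 level:", errors_fixed)
print("false rejections, shrinking level:", errors_shrinking)
```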
Citation
Michael Naaman. "Almost sure hypothesis testing and a resolution of the Jeffreys-Lindley paradox." Electron. J. Statist. 10(1): 1526-1550, 2016. https://doi.org/10.1214/16-EJS1146