Open Access
A Measure of Asymptotic Efficiency for Tests of a Hypothesis Based on the Sum of Observations
Herman Chernoff
Ann. Math. Statist. 23(4): 493-507 (December, 1952). DOI: 10.1214/aoms/1177729330


In many cases an optimum or computationally convenient test of a simple hypothesis $H_0$ against a simple alternative $H_1$ may be given in the following form. Reject $H_0$ if $S_n = \sum^n_{j=1} X_j \leqq k,$ where $X_1, X_2, \cdots, X_n$ are $n$ independent observations of a chance variable $X$ whose distribution depends on the true hypothesis and where $k$ is some appropriate number. In particular, the likelihood ratio test for fixed sample size can be reduced to this form. It is shown that with each test of the above form there is associated an index $\rho$. If $\rho_1$ and $\rho_2$ are the indices corresponding to two alternative tests, then $e = \log \rho_1/\log \rho_2$ measures the relative efficiency of these tests in the following sense. For large samples, a sample of size $n$ with the first test will give about the same probabilities of error as a sample of size $en$ with the second test. To obtain the above result, use is made of the fact that $P(S_n \leqq na)$ behaves roughly like $m^n$, where $m$ is the minimum value assumed by the moment generating function of $X - a$. It is shown that if $H_0$ and $H_1$ specify probability distributions of $X$ which are very close to each other, one may approximate $\rho$ by assuming that $X$ is normally distributed.
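The key fact cited in the abstract — that $P(S_n \leqq na)$ behaves roughly like $m^n$, with $m = \min_t E[e^{t(X-a)}]$ — is now known as the Chernoff bound. A minimal numerical sketch (not from the paper; the Bernoulli example, the parameter values, and the grid minimization are illustrative assumptions) comparing the bound $m^n$ against the exact lower-tail probability for a sum of Bernoulli variables:

```python
import math

def chernoff_bound(mgf, a, n, t_grid):
    """Bound P(S_n <= n*a) by m**n, where m = min_t exp(-t*a) * E[exp(t*X)].

    For the lower tail the minimizing t is nonpositive, so t_grid
    should cover t <= 0.
    """
    m = min(math.exp(-t * a) * mgf(t) for t in t_grid)
    return m ** n

# Illustrative choice: X ~ Bernoulli(p), threshold a below the mean p.
p, a, n = 0.5, 0.3, 100
mgf = lambda t: (1 - p) + p * math.exp(t)  # MGF of Bernoulli(p)

t_grid = [-5.0 + 0.001 * i for i in range(5001)]  # grid over t in [-5, 0]
bound = chernoff_bound(mgf, a, n, t_grid)

# Exact binomial tail P(S_n <= n*a) for comparison.
exact = sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k)
            for k in range(int(n * a) + 1))

print(f"Chernoff bound: {bound:.3e}, exact tail: {exact:.3e}")
```

For these values the minimizing $t$ is $\log(3/7) \approx -0.847$, giving $m \approx 0.921$, so the bound decays geometrically in $n$ while always dominating the exact tail probability.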




Published: December, 1952
First available in Project Euclid: 28 April 2007

zbMATH: 0048.11804
MathSciNet: MR57518
Digital Object Identifier: 10.1214/aoms/1177729330

Rights: Copyright © 1952 Institute of Mathematical Statistics
