Open Access
Sequential Estimation of the Mean of a Log-Normal Distribution Having a Prescribed Proportional Closeness
S. Zacks
Ann. Math. Statist. 37(6): 1688-1696 (December, 1966). DOI: 10.1214/aoms/1177699158

Abstract

It is a common practice, in engineering and the applied sciences, to ask for an estimator of a parameter of a statistical distribution which, with high probability, does not deviate from the value of the parameter by more than a certain percentage of its absolute value. In other words, if $\theta$ is the parameter under consideration and $\hat{\theta}$ is an estimator of $\theta$, it is required that, for given $0 < \delta < 1$ and $0 < \gamma < 1$, \begin{equation*}\tag{0.1}P_\theta\lbrack|\hat{\theta} - \theta| < \delta|\theta|\rbrack \geqq \gamma \quad\text{for all } \theta.\end{equation*} This probability is called the proportional closeness of $\hat\theta$ (see Ehrenfeld and Littauer [5], p. 339). In the present paper we study the problem of estimating the mean of a log-normal distribution by a procedure which guarantees a prescribed proportional closeness. When the variance, $\sigma^2$, of the corresponding normal distribution is known, there is an efficient fixed-sample estimation procedure having the required closeness property. The sample size required in this case is \begin{equation*}\tag{0.2}n_0 = \text{smallest integer} \geqq \chi_\gamma^2\lbrack 1\rbrack \sigma^2 \log^{-2}(1 + \delta),\end{equation*} where $\chi_\gamma^2\lbrack 1\rbrack$ denotes the $\gamma$th fractile of the $\chi^2$-distribution with 1 degree of freedom. As indicated in the sequel, there is no such fixed-sample procedure when $\sigma^2$ is unknown. The prescribed closeness property can, however, be guaranteed if the estimation is based on at least two stages of sampling.

The properties of two sequential estimation procedures, which asymptotically (as $\delta \rightarrow 0$) guarantee the prescribed proportional closeness, are presented in the present paper. One procedure is based on the maximum likelihood estimator of the mean and is called the sequential M.L. procedure. The other is based on the sample mean and is called the sequential S.M. procedure. Let $v_n$ denote the maximum likelihood estimator of $\sigma^2$. The stopping rule for the sequential M.L. procedure defines the sample size, $K$, to be the first integer $k \geqq 2$ for which the following inequality is satisfied: \begin{equation*}\tag{0.3}v_k \leqq (1 + (2/c_k)k \log^2(1 + \delta))^{\frac{1}{2}} - 1,\quad 0 < \delta < 1,\end{equation*} where $\{c_k\}$ is a sequence of bounded, positive constants approaching $\chi_\gamma^2\lbrack 1\rbrack$ as $k \rightarrow \infty$. The sample size, $N$, in the sequential S.M. procedure is the first integer $n \geqq 2$ such that \begin{equation*}\tag{0.4}v_n \leqq \log(1 + (\delta^2/c_n)n),\quad 0 < \delta < 1.\end{equation*} It is proven that both stopping rules, (0.3) and (0.4), yield well-defined stopping variables, which are decreasing functions of $\delta$ and have finite expectations for every $0 < \delta < 1$. The asymptotic orders of magnitude (a.s.) of $K$ and of $N$ are given, as well as the asymptotic orders of magnitude of their expectations (as $\delta \rightarrow 0$).

It is shown that the efficiency of the sequential S.M. procedure, relative to that of the sequential M.L. procedure, decreases to zero as $\sigma^2 \rightarrow \infty$. That is, \begin{equation*}\tag{0.5}\lim_{\delta \rightarrow 0, \sigma^2 \rightarrow \infty} E\{K\}/E\{N\} = 0.\end{equation*} Moreover, $\lim_{\delta \rightarrow 0} E\{K\}/E\{N\} < 1$ for all $0 < \sigma^2 < \infty$. This result establishes the uniform asymptotic superiority (with respect to all $0 < \sigma^2 < \infty$) of the sequential M.L. procedure over the sequential S.M. procedure.
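For concreteness, the following Python sketch (not part of the paper) simulates the two stopping rules on the normal observations $Y_i = \log X_i \sim N(\mu, \sigma^2)$. It assumes the constant choice $c_k \equiv \chi_\gamma^2\lbrack 1\rbrack$, which is one admissible sequence since $\{c_k\}$ need only be bounded, positive, and approach $\chi_\gamma^2\lbrack 1\rbrack$; the function names are hypothetical.

```python
import numpy as np
from scipy.stats import chi2


def v(y):
    """M.L. estimator of sigma^2 (divisor n, not n - 1)."""
    y = np.asarray(y)
    return float(np.mean((y - y.mean()) ** 2))


def sample_size_ml(mu, sigma, delta, gamma, rng):
    """Sample size K of the sequential M.L. rule (0.3): stop at the first
    k >= 2 with v_k <= (1 + (2/c_k) k log^2(1 + delta))^(1/2) - 1.
    Assumes the constant sequence c_k = chi^2_gamma[1]."""
    c = chi2.ppf(gamma, df=1)          # gamma-th fractile of chi^2 with 1 df
    d2 = np.log(1 + delta) ** 2
    y = list(rng.normal(mu, sigma, size=2))
    k = 2
    while v(y) > np.sqrt(1 + (2.0 / c) * k * d2) - 1:
        y.append(rng.normal(mu, sigma))
        k += 1
    return k


def sample_size_sm(mu, sigma, delta, gamma, rng):
    """Sample size N of the sequential S.M. rule (0.4): stop at the first
    n >= 2 with v_n <= log(1 + (delta^2/c_n) n).
    Assumes the constant sequence c_n = chi^2_gamma[1]."""
    c = chi2.ppf(gamma, df=1)
    y = list(rng.normal(mu, sigma, size=2))
    n = 2
    while v(y) > np.log(1 + (delta ** 2 / c) * n):
        y.append(rng.normal(mu, sigma))
        n += 1
    return n
```

Both loops terminate with probability one: $v_n \rightarrow \sigma^2$ a.s., while the right-hand sides of (0.3) and (0.4) increase without bound as $k, n \rightarrow \infty$.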
The sequential M.L. procedure studied in the present paper is not, however, asymptotically efficient in the Chow-Robbins sense. Chow and Robbins defined in [3] a sequential procedure to be asymptotically efficient if \begin{equation*}\tag{0.6}\lim_{\delta \rightarrow 0}\frac{E\{\text{sample size in sequential procedure}\}}{\text{minimum sample size required for } \sigma^2 \text{ known}} = 1.\end{equation*} It is proven that, for the sequential M.L. procedure, \begin{equation*}\tag{0.7}\lim_{\delta \rightarrow 0} E\{K\}/n_0 = 1 + \frac{1}{2}\sigma^2,\quad 0 < \sigma^2 < \infty.\end{equation*} This limit is always greater than 1 and approaches infinity as $\sigma^2 \rightarrow \infty$. A sequential procedure for the log-normal case which satisfies the prescribed closeness condition asymptotically and is asymptotically efficient in the Chow-Robbins sense is still unavailable. The reason for this shortcoming is that we actually have to determine a fixed-width confidence interval for $\mu + \frac{1}{2}\sigma^2$, where $(\mu, \sigma^2)$ are the mean and variance of the normally distributed $Y = \log X$. Chow and Robbins [3], Gleser, Robbins and Starr [6], and Starr [7] show that a sequential estimation of the mean, based on the sample mean and the sample variance, provides an asymptotically efficient fixed-width confidence procedure. This result is in contrast to the main result of the present paper, which shows that a prescribed-proportional-closeness sequential estimator of the mean of a log-normal distribution based on the sample mean is inefficient, and that there exists a more efficient sequential procedure, namely the one based on the M.L. estimator.
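Continuing the sketch above (it reuses the hypothetical sample_size_ml and sample_size_sm; the parameter values are arbitrary), a small Monte Carlo run illustrates the limits (0.5) and (0.7) for a moderately small $\delta$:

```python
import math

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1966)
mu, sigma, delta, gamma = 0.0, 1.0, 0.1, 0.95

c = chi2.ppf(gamma, df=1)
n0 = math.ceil(c * sigma ** 2 / math.log(1 + delta) ** 2)      # (0.2)

K = [sample_size_ml(mu, sigma, delta, gamma, rng) for _ in range(100)]
N = [sample_size_sm(mu, sigma, delta, gamma, rng) for _ in range(100)]

# (0.7): E{K}/n_0 should be near 1 + sigma^2/2 = 1.5 for sigma = 1.
print(np.mean(K) / n0)
# E{K}/E{N} < 1, in line with the superiority of the M.L. procedure.
print(np.mean(K) / np.mean(N))
```

Heuristically, (0.7) follows from (0.3): at stopping, $v_k \approx \sigma^2$, so $(1 + \sigma^2)^2 - 1 \approx (2/c)k \log^2(1 + \delta)$, giving $k \approx c\sigma^2(1 + \frac{1}{2}\sigma^2)\log^{-2}(1 + \delta) = n_0(1 + \frac{1}{2}\sigma^2)$.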

Citation


S. Zacks. "Sequential Estimation of the Mean of a Log-Normal Distribution Having a Prescribed Proportional Closeness." Ann. Math. Statist. 37 (6) 1688 - 1696, December, 1966. https://doi.org/10.1214/aoms/1177699158

Information

Published: December, 1966
First available in Project Euclid: 27 April 2007

zbMATH: 0152.17704
MathSciNet: MR203891
Digital Object Identifier: 10.1214/aoms/1177699158

Rights: Copyright © 1966 Institute of Mathematical Statistics
