The Annals of Mathematical Statistics

Linear Functions of Order Statistics

Stephen Mack Stigler

Abstract

The purpose of this paper is to investigate the asymptotic normality of linear combinations of order statistics; that is, to find conditions under which a statistic of the form $S_n = \sum^n_{i=1} c_{in}X_{in}$ has a limiting normal distribution as $n$ becomes infinite, where the $c_{in}$'s are constants and $X_{1n}, X_{2n}, \cdots, X_{nn}$ are the observations of a sample of size $n$, ordered by increasing magnitude. Aside from the sample mean (the case where the weights $c_{in}$ are all equal to $1/n$), the first proof of asymptotic normality within this class was by Smirnov in 1935 [19], who considered the case in which nonzero weight is attached to at most two percentiles. In 1946, Mosteller [13] extended this to the case of several percentiles, and coined the phrase "systematic statistic" to describe $S_n$. Since the publication in 1955 of a paper by Jung [11] concerned with finding optimal weights for $S_n$ in certain estimation problems, interest in proving its asymptotic normality under more general conditions has grown. For example, Weiss [21] proved that $S_n$ has a limiting normal distribution when no weight is attached to the observations below the $p$th sample percentile and above the $q$th sample percentile, $p < q$, and the remaining observations are weighted according to a function $J$ by $c_{in} = J(i/(n + 1))$, where $J$ is assumed to have a bounded derivative between $p$ and $q$. Within the past few years, several notable attempts have been made to prove the asymptotic normality of $S_n$ under more general conditions on the weights and underlying distribution. These attempts have employed three essentially different methods. In [1] Bickel used an invariance principle for order statistics to prove asymptotic normality when $\sum_{i<tn} c_{in}$ converges to a function $J(t)$ of bounded variation and the underlying distribution $F$ has a continuous density, positive on the support of $F$. 
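To make the weighting scheme concrete, the following sketch (not from the paper; the function name is illustrative) computes $S_n$ when the weights are generated from a score function $J$ as in Weiss's formulation, $c_{in} = J(i/(n + 1))$:

```python
import numpy as np

def l_statistic(sample, J):
    """S_n = sum_{i=1}^n c_{in} X_{in}, where X_{1n} <= ... <= X_{nn} are the
    order statistics and c_{in} = J(i / (n + 1)) for a score function J on (0, 1)."""
    x = np.sort(np.asarray(sample, dtype=float))  # order the sample by increasing magnitude
    n = len(x)
    t = np.arange(1, n + 1) / (n + 1)             # evaluation points i / (n + 1)
    return float(np.sum(J(t) * x))
```

The sample-mean case of the abstract corresponds to constant weights $c_{in} = 1/n$, e.g. `l_statistic(data, lambda t: np.full_like(t, 1.0 / len(data)))`.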
His method was quite successful in dealing with statistics which put no weight on observations below the $p$th and above the $q$th sample percentile, $p < q$, but in other cases he did not allow the more extreme observations to be weighted more than in the sample mean. More recently in [17], Shorack used a more powerful version of the same approach to obtain a stronger result, allowing much more weight on the extremes. Chernoff, Gastwirth, and Johns, in [3], employed a device of Rényi [14] and expressed $S_n$ as a linear combination of independent, exponentially distributed random variables plus a remainder term; they then showed that the sum of independent variables has a limiting normal distribution and the remainder is asymptotically negligible in the sense that it tends to zero in probability as $n$ tends to infinity. They proved asymptotic normality if the $c_{in}$'s give substantial weight to only a fixed number of percentiles and obey certain boundedness conditions elsewhere, and $F$ has a density which is continuous outside a set of measure zero, bounded away from zero on the interior of the support of $F$, and smooth in the tails. A different approach is used by Govindarajulu in [7]. He adopted a version due to LeCam of the method of Chernoff and Savage [4], which was used by Govindarajulu, LeCam, and Raghavachari in [8] to prove asymptotic normality of linear rank statistics, and for the case $c_{in} = J(i/(n + 1))$ he expressed $S_n$ as a linear combination of independent random variables plus a remainder term. Then if $J$ is absolutely continuous and both it and its derivative satisfy certain boundedness conditions at zero and one, the remainder term tends to zero in probability and $S_n$ has a limiting normal distribution. 
While his conditions on the weights are very restrictive (for example, he does not allow substantial weight to be put on sample percentiles), his conditions on the underlying distribution are the weakest yet obtained, requiring only that the inverse of $F$ does not grow very rapidly at zero or one. Recently in [12], Moore gave a short proof of asymptotic normality, also along the lines of Chernoff and Savage [4], which permits quite general $F$, but again at the expense of stringent conditions on $J$. In this investigation we use yet another method of attacking this problem. Using a procedure due to Hájek [9], who applied it to linear rank statistics, we will represent the statistic $S_n$ as a linear combination of independent random variables, to which the usual central limit theory can be applied, plus a remainder term, and then, under quite general conditions on the weights, prove that the remainder converges to zero in mean square, rather than in the weaker sense of convergence in probability as in [3], [7], and [12]. This is accomplished by first approximating the statistic $T_n = \sum^{n-b_n}_{i=b_n} c_{in}X_{in}$ by a sum of independent random variables, where $b_n$ is a sequence of integers tending to infinity slower than $n$ but faster than $\log n$ as $n$ increases, and then finding conditions under which $T_n$ approximates $S_n$ in mean square. The result is essentially stronger than that of Bickel, in that much more weight is allowed on the extreme observations; however, a smoothness condition on the distribution similar to that of Chernoff, Gastwirth, and Johns is required. For statistics of the form $T_n$ the result is essentially stronger than that of Chernoff, Gastwirth, and Johns, although slightly more restrictive conditions are required to prove mean square equivalence of $T_n$ and $S_n$. 
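The truncated statistic $T_n$ can be sketched as follows under one hypothetical choice of trimming sequence; the paper requires only that $b_n$ tend to infinity slower than $n$ but faster than $\log n$, and $b_n = \lceil(\log n)^2\rceil$ is used here purely for illustration:

```python
import math
import numpy as np

def truncated_l_statistic(sample, J):
    """T_n = sum_{i=b_n}^{n-b_n} c_{in} X_{in}, with c_{in} = J(i / (n + 1)).
    The b_n - 1 smallest and b_n - 1 largest order statistics get zero weight."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    # Illustrative trimming sequence: grows faster than log n but slower than n.
    b = math.ceil(math.log(n) ** 2)
    i = np.arange(b, n - b + 1)        # 1-based indices b_n, ..., n - b_n
    return float(np.sum(J(i / (n + 1)) * x[i - 1]))
```

For a sample of size $n = 100$ this choice gives $b_n = 22$, so only the central order statistics $X_{22,100}, \ldots, X_{78,100}$ contribute to $T_n$.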
We treat a much more general class of weights than does Govindarajulu or Moore, but our conditions on the underlying distribution are stronger than their conditions. In the course of the proof we derive asymptotic expressions for the covariances of the order statistics and the variance of $S_n$. For the case of the variance of a single order statistic $X_{in}$, the result is proved under weaker conditions on $F$ and for a wider range of $i$ than by Sen [16], Van Zwet [20], Bickel [1], or Blom [2], although the speed of convergence is slower than in [16], [20], or [2]. The asymptotic expression for covariances is similarly improved over Blom [2]. The first section of the paper describes the method to be used and states two previously known propositions which will be useful in the following sections. Section two contains the calculation of an approximation for an order statistic by a sum of independent random variables and an exact expression for the covariance of two such approximations. Asymptotic expressions for this covariance and the covariance of two order statistics are derived in Section three, as well as an expression for the variance of $S_n$. Section four contains the proof of the asymptotic normality of $S_n$ when the extremes are not included and no weight is allowed for $i/(n + 1)$ near a point where the derivative of the inverse of $F$ misbehaves, and conditions are given under which these restrictions can be dropped. Finally, in Section five we discuss the limitations of the method used and extend the results to the slightly more general class of statistics of the form $\sum c_{in}h(X_{in})$, also considered in [3], [7], and [17].

Article information

Source
Ann. Math. Statist., Volume 40, Number 3 (1969), 770-788.

Dates
First available in Project Euclid: 27 April 2007

Permanent link to this document
https://projecteuclid.org/euclid.aoms/1177697587

Digital Object Identifier
doi:10.1214/aoms/1177697587

Mathematical Reviews number (MathSciNet)
MR264822

Zentralblatt MATH identifier
0186.52502

JSTOR
links.jstor.org

Citation

Stigler, Stephen Mack. Linear Functions of Order Statistics. Ann. Math. Statist. 40 (1969), no. 3, 770--788. doi:10.1214/aoms/1177697587. https://projecteuclid.org/euclid.aoms/1177697587
