On the basis of independent samples $\{X_1, \cdots, X_m\}$ and $\{Y_1, \cdots, Y_n\}$ with distributions $F$ and $G$, respectively, the hypothesis that $F \equiv G$ may be tested. Given the functional forms $F(x_1, \cdots, x_m)$ and $G(y_1, \cdots, y_n)$ of the sampling distributions except for values of certain parameters, the likelihood ratio approach, for example, can be used. In this case it is not crucial to assume that the samples are random, i.e., that $F(x_1, \cdots, x_m) = F(x_1) \cdots F(x_m)$ and $G(y_1, \cdots, y_n) = G(y_1) \cdots G(y_n)$, although such a simplification is useful whenever realistic. However, the nonparametric treatment of the problem has relied heavily on the assumption of random samples. Yet if the samples arise as realizations of two stochastic processes, the assumption of randomness is not realistic except in the case of renewal processes. Thus it is desirable to extend the scope of established nonparametric procedures to more general applications.

The present paper deals with the Wilcoxon two-sample statistic. Among the desirable features of this statistic, when defined on independent random samples, is its asymptotically normal distribution, which for large samples facilitates a test of the hypothesis that $F \equiv G$ and a calculation of the power for any alternative $(F, G)$. It will be seen that these properties hold also when the samples arise from stochastic processes belonging to a wide class, including strictly stationary strongly mixing processes.

Assume that the samples $\{X_1, \cdots, X_m\}$ and $\{Y_1, \cdots, Y_n\}$ are independent of each other, but let the random variables within a sample be possibly dependent. Assume that the functions $F(\cdot)$ and $G(\cdot)$ are continuous. The hypothesis $H: F \equiv G$ may be tested (conservatively) by testing the hypothesis $H_0: \gamma = 0$, where $\gamma = 2P\{Y > X\} - 1$.
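To make the parameter $\gamma = 2P\{Y > X\} - 1$ concrete, consider the illustrative case (not from the paper) of independent $X \sim N(0,1)$ and $Y \sim N(\delta, 1)$; then $Y - X \sim N(\delta, 2)$, so $P\{Y > X\} = \Phi(\delta/\sqrt{2})$. A minimal sketch, with the normal shift model and the function name being assumptions for illustration only:

```python
import math

def gamma_normal_shift(delta):
    """gamma = 2 P{Y > X} - 1 for independent X ~ N(0,1), Y ~ N(delta, 1).

    Y - X ~ N(delta, 2), so P{Y > X} = Phi(delta / sqrt(2)),
    and Phi(t) = 0.5 * (1 + erf(t / sqrt(2))) gives erf(delta / 2) below.
    """
    p = 0.5 * (1.0 + math.erf(delta / 2.0))
    return 2.0 * p - 1.0

print(gamma_normal_shift(0.0))  # 0.0: under F = G the hypothesis H0 holds
```

Note that $\gamma = 0$ whenever $F \equiv G$ (with $F$ continuous), which is why rejecting $H_0$ implies rejecting $H$, while $\gamma = 0$ can also hold for certain $F \not\equiv G$; hence the test of $H$ via $H_0$ is conservative.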
A representation of the Wilcoxon two-sample statistic is the $U$-statistic with sign function as kernel, \begin{equation*}\tag{1.1}U = (mn)^{-1}\sum^m_{i=1} \sum^n_{j=1} s(Y_j - X_i),\end{equation*} where $s(u) = -1, 0, 1$ according as $u < 0, = 0, > 0$. Since $Es(Y - X) = \gamma$, the statistic $U$ affords a natural basis for testing $H_0$. Under appropriate conditions, the statistic $Z = m^{\frac{1}{2}}(U - \gamma)$ has a limiting normal distribution with mean 0 and variance \begin{equation*}\tag{1.2}A^2 = 4 \lim_{k\rightarrow\infty} k^{-1} \operatorname{Var}\lbrack\sum^k_{i=1} G(X_i)\rbrack + 4c \lim_{k\rightarrow\infty}k^{-1} \operatorname{Var}\lbrack\sum^k_{i=1} F(Y_i)\rbrack,\end{equation*} as $m$ and $n \rightarrow \infty$ such that $m/n$ has a limit $c \neq 0$. The main conclusions of this nature are given in Theorems 3.1 and 3.2. Some areas of application are indicated in Section 4. The treatment of the quantity $A^2$ is discussed in Section 5.

The limiting behavior of $Z$ is obtained by consideration of a statistic asymptotically equivalent in distribution but more amenable to the direct application of central limit theory, an approach put forth by Hoeffding [3] in dealing with a wide class of $U$-statistics as defined on a single sample of mutually independent random variables. The present contribution adapts the method to a single, but important, two-sample $U$-statistic with dependence allowed within samples. Define \begin{equation*}\tag{1.3}W = m^{-\frac{1}{2}} \sum^m_{i=1} \lbrack f_{10}(X_i) - \gamma\rbrack + m^{\frac{1}{2}}n^{-1} \sum^n_{j=1} \lbrack f_{01} (Y_j) - \gamma\rbrack,\end{equation*} where $f_{10}(t) = Es(Y - t) = 1 - 2G(t)$ and $f_{01}(t) = Es(t - X) = 2F(t) - 1$. Since $Ef_{10}(X) = Ef_{01}(Y) = \gamma$, we have $EW = E(Z - W) = 0$.
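The statistic $U$ of (1.1) can be computed directly from the two samples. The following sketch (not part of the paper; the sampling setup with independent standard normal samples, chosen so that $H_0$ holds, and the function names are assumptions for illustration) evaluates the sign kernel over all $mn$ pairs:

```python
import random

def s(u):
    """Sign kernel: s(u) = -1, 0, 1 according as u < 0, u = 0, u > 0."""
    return (u > 0) - (u < 0)

def wilcoxon_u(xs, ys):
    """U = (mn)^{-1} sum_{i,j} s(Y_j - X_i), an unbiased estimator of gamma."""
    m, n = len(xs), len(ys)
    return sum(s(y - x) for x in xs for y in ys) / (m * n)

# With F = G = standard normal, gamma = 2 P{Y > X} - 1 = 0, so U should be
# near 0; under a positive shift of the Y-sample, U moves toward gamma > 0.
rng = random.Random(0)
xs = [rng.gauss(0, 1) for _ in range(200)]
ys = [rng.gauss(0, 1) for _ in range(300)]
print(round(wilcoxon_u(xs, ys), 3))  # near 0 under H0
```

The double sum costs $O(mn)$ kernel evaluations; for large samples the equivalent rank-sum form of the Wilcoxon statistic can be computed in $O((m+n)\log(m+n))$ by sorting, but the kernel form above matches (1.1) term by term.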
In Section 2 we find conditions such that $E(Z - W)^2 \rightarrow 0$, in which case it follows by Chebyshev's inequality that $(Z - W) \rightarrow 0$ in probability and hence that the statistics $Z$ and $W$ have the same limiting distribution (if any). The application of central limit theory to $W$ is through the sums $\sum^m_1 f_{10}(X_i)$ and $\sum^n_1 f_{01}(Y_j)$, or equivalently through $m^{-\frac{1}{2}} \sum^m_1 G(X_i)$ and $n^{-\frac{1}{2}}\sum^n_1 F(Y_j)$. If each of these independent normed sums has a limiting normal distribution, then $W$ is asymptotically normal, as $m$ and $n \rightarrow \infty$ such that $m/n \rightarrow c \neq 0$. Relevant central limit theorems for sums of dependent variables are utilized in Section 3.
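The closeness of $Z$ and $W$ can be checked numerically in the simplest admissible case: independent random samples with $F = G = \Phi$, the standard normal distribution function, so that $\gamma = 0$. This concrete choice is an assumption of the sketch below, not of the paper, which allows dependence within samples; the sketch averages $(Z - W)^2$ over replications as a stand-in for $E(Z - W)^2$:

```python
import math
import random

def norm_cdf(t):
    # Phi(t) for the standard normal; plays the role of both F and G below.
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def s(u):
    return (u > 0) - (u < 0)

def z_and_w(m, n, rng):
    # Independent random samples with F = G = Phi, so gamma = 0.
    xs = [rng.gauss(0, 1) for _ in range(m)]
    ys = [rng.gauss(0, 1) for _ in range(n)]
    u = sum(s(y - x) for x in xs for y in ys) / (m * n)
    z = math.sqrt(m) * u  # Z = m^{1/2}(U - gamma) with gamma = 0
    # W of (1.3), using f10(t) = 1 - 2G(t) and f01(t) = 2F(t) - 1:
    w = (sum(1.0 - 2.0 * norm_cdf(x) for x in xs) / math.sqrt(m)
         + math.sqrt(m) / n * sum(2.0 * norm_cdf(y) - 1.0 for y in ys))
    return z, w

rng = random.Random(1)
pairs = [z_and_w(200, 200, rng) for _ in range(50)]
mean_sq_diff = sum((z - w) ** 2 for z, w in pairs) / len(pairs)
print(mean_sq_diff)  # small relative to Var(Z), which is of order 1
```

In this i.i.d. setting $E(Z - W)^2$ is of order $1/n$, so the printed average is small compared with the variance of $Z$ itself, illustrating why $Z$ and $W$ share their limiting distribution.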

Ann. Math. Statist. 39(4): 1202-1209 (August, 1968). DOI: 10.1214/aoms/1177698245