The Annals of Mathematical Statistics

Sign and Wilcoxon Tests for Linearity

Richard A. Olshen

Abstract

This paper introduces two tests of linearity against convexity in regression. In the first, the test statistic is the number of positive signs of second differences computed from certain of the observations. In the second, a Wilcoxon statistic is computed from those differences. Possible competitors of these tests are the usual least-squares $t$-test applied to regression coefficients, Mood's median test [12], and Hill's $R$ test [6]. Certainly the first of these is to be preferred when errors are independent and normally distributed with common variance, and the alternative is quadratic regression. The sign test to be introduced here is simpler to compute than any of these other three tests, and the Wilcoxon test is also rather simple to compute. Both can be criticized in that their test statistics are calculated from certain randomly chosen observations.

The tests based on second differences are compared with the $t$-test when the alternative is quadratic regression and errors are continuously and symmetrically distributed. To be precise, in the model \begin{equation*}\tag{1}Y_i = gX^2_i + bX_i + a + \epsilon_i\end{equation*} for $i = 1, \cdots, N$, we shall compare tests of $H_0:g = 0$ against $H_1:g > 0; a, b$, and $g$ are unspecified, and the $\epsilon$'s are independent, with identical distributions which are symmetric about their mean value of zero and have (unknown) variance $\sigma^2$.

The criterion whereby tests are compared is Pitman efficiency, which is defined as follows. Suppose $\theta$ is an unknown real parameter of a probability distribution $H_\theta$. Suppose further that for each positive integer $N, A_N$ and $A^\ast_N$ are two size $\alpha (0 < \alpha < 1)$ tests of the null hypothesis $\theta = \theta_0$ against the alternative $\theta > \theta_0$ based on a random sample of size $N$ from $H_\theta$.
Let $\beta_N(\theta)$ and $\beta^\ast_N(\theta)$ be the respective power functions, $\beta$ be a fixed number in $(\alpha, 1), \xi_N$ be a sequence of numbers for which $\xi_N \downarrow \theta$, and $M_1(\xi_N)\lbrack M_2(\xi_N)\rbrack$ be the least integer for which $\beta_{M_1}(\xi_N) \geqq \beta\lbrack\beta^\ast_{M_2}(\xi_N) \geqq \beta\rbrack$. The Pitman efficiency of $A_N$ relative to $A^\ast_N$ for the sequence of alternatives $\xi_N$ is defined to be the $\lim_{N \rightarrow \infty} M_2(\xi_N)/M_1(\xi_N)$ provided that limit exists and does not depend on $\alpha$ and $\beta$ beyond the requirement $0 < \alpha < \beta < 1$. This definition differs from some others which are commonly used (see, for example, [8]). Various technical facts concerning the Pitman efficiency of one-sample tests are discussed in an appendix, which may be of independent interest.
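The general idea behind the two test statistics can be illustrated with a minimal sketch. The following assumes equally spaced design points and second differences formed from disjoint consecutive triples of observations; the paper's actual construction selects certain randomly chosen observations, and the handling of ties and zero differences is simplified here. All function names are illustrative, not from the paper.

```python
def second_differences(y):
    """Second differences Y_{i} - 2*Y_{i+1} + Y_{i+2} from disjoint
    consecutive triples of observations.  Under H0 (linear regression)
    each difference is symmetric about zero; under convexity (g > 0)
    the differences tend to be positive."""
    return [y[i] - 2 * y[i + 1] + y[i + 2] for i in range(0, len(y) - 2, 3)]

def sign_statistic(d):
    """Sign-test statistic: the number of positive second differences."""
    return sum(1 for v in d if v > 0)

def wilcoxon_statistic(d):
    """Wilcoxon signed-rank statistic: sum of the ranks of |d_j|
    over the positive differences (ties and zeros not treated specially)."""
    order = sorted(range(len(d)), key=lambda j: abs(d[j]))
    return sum(rank + 1 for rank, j in enumerate(order) if d[j] > 0)

# A noiseless convex example: y = x^2 at x = 0, 1, ..., 8.
d = second_differences([x * x for x in range(9)])
print(d)                       # every second difference is positive
print(sign_statistic(d))
print(wilcoxon_statistic(d))
```

Large values of either statistic favor the convex alternative $H_1: g > 0$; for a linear (null) mean function the noiseless second differences vanish, and with symmetric errors each difference is equally likely to be positive or negative.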

Article information

Source
Ann. Math. Statist., Volume 38, Number 6 (1967), 1759-1769.

Dates
First available in Project Euclid: 27 April 2007

https://projecteuclid.org/euclid.aoms/1177698610

Digital Object Identifier
doi:10.1214/aoms/1177698610

Mathematical Reviews number (MathSciNet)
MR217969

Zentralblatt MATH identifier
0227.62030
