## The Annals of Statistics

### Nonparametric analysis of covariance

#### Abstract

In the problem of testing the equality of $k$ regression curves from independent samples, we discuss three methods using nonparametric estimators of the regression function. The first test is based on a linear combination of estimators for the integrated variance function in the individual samples and in the combined sample. The second approach transfers the classical one-way analysis of variance to the situation of comparing nonparametric curves, while the third test compares the differences between the estimates of the individual regression functions by means of an $L^2$-distance. We prove asymptotic normality of all considered statistics under the null hypothesis and under local and fixed alternatives, with different rates corresponding to the various cases. Additionally, consistency of a wild bootstrap version of the tests is established. In contrast to most of the procedures proposed in the literature, the methods introduced in this paper are also applicable in the case of different design points in each sample and heteroscedastic errors. A simulation study is conducted to investigate the finite sample properties of the new tests, and a comparison with recently proposed and related procedures is performed.
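As a rough illustration of the third approach, the following sketch (hypothetical code, not taken from the paper) computes Nadaraya-Watson estimates of two regression curves on a common grid and evaluates an $L^2$-type distance between them. The Gaussian kernel, the bandwidth `h`, and the grid size are illustrative choices, and the calibration of the test (e.g., by the wild bootstrap discussed above) is omitted.

```python
import numpy as np

def nw_estimate(x, y, grid, h):
    """Nadaraya-Watson kernel regression estimate evaluated on a grid.

    Uses a Gaussian kernel; h is the bandwidth (an illustrative choice).
    """
    # Weight matrix: row i holds kernel weights of all data points for grid[i].
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def l2_distance_statistic(x1, y1, x2, y2, h, n_grid=200):
    """L2-type distance between two estimated regression curves.

    The integrated squared difference is approximated by averaging over an
    equidistant grid on the overlap of the two design ranges, so the two
    samples may have entirely different design points.
    """
    lo = max(x1.min(), x2.min())
    hi = min(x1.max(), x2.max())
    grid = np.linspace(lo, hi, n_grid)
    m1 = nw_estimate(x1, y1, grid, h)
    m2 = nw_estimate(x2, y2, grid, h)
    return np.mean((m1 - m2) ** 2) * (hi - lo)
```

For two samples generated from the same curve the statistic is close to zero, while a shift between the curves inflates it; in practice its null distribution would be approximated by resampling rather than read off directly.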

#### Article information

Source
Ann. Statist., Volume 29, Number 5 (2001), 1361-1400.

Dates
First available in Project Euclid: 8 February 2002

https://projecteuclid.org/euclid.aos/1013203458

Digital Object Identifier
doi:10.1214/aos/1013203458

Mathematical Reviews number (MathSciNet)
MR1873335

Zentralblatt MATH identifier
1043.62033

Subjects
Primary: 62G05: Estimation

#### Citation

Dette, Holger; Neumeyer, Natalie. Nonparametric analysis of covariance. Ann. Statist. 29 (2001), no. 5, 1361--1400. doi:10.1214/aos/1013203458. https://projecteuclid.org/euclid.aos/1013203458

#### References

• Alcalá, J. T., Cristóbal, J. A. and González-Manteiga, W. (1999). Goodness-of-fit test for linear models based on local polynomials. Statist. Probab. Lett. 42 39-46.
• Azzalini, A. and Bowman, A. W. (1993). On the use of nonparametric regression for checking linear relationships. J. Roy. Statist. Soc. Ser. B 55 549-557.
• Berger, J. O. and Delampady, M. (1987). Testing precise hypotheses. Statist. Sci. 2 317-352.
• Bickel, P. J. and Freedman, D. A. (1981). Some asymptotic theory for the bootstrap. Ann. Statist. 9 1196-1217.
• Biedermann, S. and Dette, H. (2000). Testing linearity of regression models with dependent errors by kernel based methods. Test 3 417-438.
• Delgado, M. A. (1993). Testing the equality of nonparametric regression curves. Statist. Probab. Lett. 17 199-204.
• Dette, H. and Munk, A. (1998). Nonparametric comparison of several regression functions: exact and asymptotic theory. Ann. Statist. 26 2339-2368.
• Dette, H. and Neumeyer, N. (1999). Nonparametric analysis of covariance. Technical Report 262, Dept. Math., Ruhr-Universität, Bochum, Germany.
• Fan, J. (1992). Design-adaptive nonparametric regression. J. Amer. Statist. Assoc. 87 998-1004.
• Fan, J. and Gijbels, I. (1996). Local Polynomial Modelling and Its Applications. Chapman and Hall, London.
• Gasser, T., Müller, H.-G. and Mammitzsch, V. (1985). Kernels for nonparametric curve estimation. J. Roy. Statist. Soc. Ser. B 47 238-252.
• González-Manteiga, W. and Cao, R. (1993). Testing the hypothesis of a general linear model using nonparametric regression estimation. Test 2 161-189.
• Härdle, W. and Mammen, E. (1993). Comparing nonparametric versus parametric regression fits. Ann. Statist. 21 1926-1947.
• Härdle, W. and Marron, J. S. (1990). Semiparametric comparison of regression curves. Ann. Statist. 18 63-89.
• Hall, P. and Hart, J. D. (1990). Bootstrap test for difference between means in nonparametric regression. J. Amer. Statist. Assoc. 85 1039-1049.
• Hall, P. and Marron J. S. (1990). On variance estimation in nonparametric regression. Biometrika 77 415-419.
• Hall, P., Huber, C. and Speckman, P. L. (1997). Covariate-matched one-sided tests for the difference between functional means. J. Amer. Statist. Assoc. 92 1074-1083.
• Hjellvik, V. and Tjøstheim, D. (1995). Nonparametric tests of linearity for time series. Biometrika 82 351-368.
• de Jong, P. (1987). A central limit theorem for generalized quadratic forms. Probab. Theory Related Fields 75 261-277.
• King, E. C., Hart, J. D. and Wehrly, T. E. (1991). Testing the equality of regression curves using linear smoothers. Statist. Probab. Lett. 12 239-247.
• Kulasekera, K. B. (1995). Comparison of regression curves using quasi residuals. J. Amer. Statist. Assoc. 90 1085-1093.
• Kulasekera, K. B. and Wang, J. (1997). Smoothing parameter selection for power optimality in testing regression curves. J. Amer. Statist. Assoc. 92 500-511.
• Mack, Y. P. and Silverman, B. W. (1982). Weak and strong uniform consistency of kernel regression estimates. Z. Wahrsch. Verw. Gebiete 61 405-415.
• Mallows, C. L. (1972). A note on asymptotic joint normality. Ann. Math. Statist. 43 508-515.
• Nadaraya, E. A. (1964). On estimating regression. Theory Probab. Appl. 10 186-190.
• Neumeyer, N. (1999). Nichtparametrische Tests auf Gleichheit von Regressionsfunktionen. Diplomarbeit, Ruhr-Universität, Bochum, Germany.
• Rice, J. (1984). Bandwidth choice for nonparametric regression. Ann. Statist. 12 1215-1230.
• Rosenblatt, M. (1975). A quadratic measure of deviation of two-dimensional density estimates and a test of independence. Ann. Statist. 3 1-14.
• Sacks, J. and Ylvisaker, D. (1970). Designs for regression problems for correlated errors. Ann. Math. Statist. 41 2057-2074.
• Staudte, R. G. and Sheather, S. J. (1990). Robust Estimation and Testing. Wiley, New York.
• Wand, M. P. and Jones, M. C. (1995). Kernel Smoothing. Chapman and Hall, London.
• Watson, G. S. (1964). Smooth regression analysis. Sankhyā Ser. A 26 359-372.
• Young, S. G. and Bowman, A. W. (1995). Non-parametric analysis of covariance. Biometrics 51 920-931.