## The Annals of Statistics

### Bounds on the Efficiency of Linear Predictions Using an Incorrect Covariance Function

Michael L. Stein

#### Abstract

Suppose $z(\cdot)$ is a random process defined on a bounded set $R \subset \mathbb{R}^1$ with finite second moments. Consider the behavior of linear predictions based on $z(t_1), \ldots, z(t_n)$, where $t_1, t_2, \ldots$ is a dense sequence of points in $R$. Stein showed that if the second-order structure used to generate the predictions is incorrect but compatible with the correct second-order structure, the predictions obtained are uniformly asymptotically optimal as $n \rightarrow \infty$. In the present paper, a general method is described for obtaining rates of convergence when the covariance function is misspecified but compatible with the correct covariance function. When $z(\cdot)$ is Gaussian, these bounds are related to the entropy distance (the symmetrized Kullback divergence) between the measures for the random field under the actual and presumed covariance functions. Explicit bounds are given when $R = \lbrack 0, 1\rbrack$ and $z(\cdot)$ is stationary with spectral density of the form $f(\lambda) = (a^2 + \lambda^2)^{-p}$, where $p$ is a known positive integer and $a$ is the parameter that is misspecified. More precise results are given in the case $p = 1$. An application of this result implies that equally spaced observations are asymptotically optimal in the sense used by Sacks and Ylvisaker in terms of maximizing the Kullback divergence between the actual and presumed models when $z(\cdot)$ is Gaussian.
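As a concrete numerical illustration (not taken from the paper), the $p = 1$ case can be sketched in Python. The spectral density $f(\lambda) = (a^2 + \lambda^2)^{-1}$ corresponds to the exponential covariance $K_a(h) = (\pi/a)e^{-a|h|}$, so one can compare the mean squared error of the optimal linear predictor on $\lbrack 0, 1\rbrack$ with that of the predictor built from a misspecified $a$, and compute the entropy distance between the two zero-mean Gaussian measures for the observations. The parameter values, observation grid, and prediction point below are illustrative choices, not values from the paper.

```python
import numpy as np

def cov(a, s, t):
    """Covariance for spectral density f(lam) = (a^2 + lam^2)^{-1} (p = 1):
    K_a(h) = (pi / a) * exp(-a * |h|)."""
    return (np.pi / a) * np.exp(-a * np.abs(s - t))

def pred_mse(a_true, a_used, obs, t0):
    """MSE, under the true model (parameter a_true), of the linear
    predictor of z(t0) built from z(obs) with the presumed covariance."""
    K1 = cov(a_used, obs[:, None], obs[None, :])
    w = np.linalg.solve(K1, cov(a_used, obs, t0))  # presumed kriging weights
    K0 = cov(a_true, obs[:, None], obs[None, :])
    k0 = cov(a_true, obs, t0)
    return cov(a_true, t0, t0) - 2 * w @ k0 + w @ K0 @ w

a0, a1 = 1.0, 2.0               # true vs presumed parameter (illustrative)
obs = np.linspace(0.0, 1.0, 9)  # equally spaced observations in [0, 1]
t0 = 0.55                       # prediction point between observations

e_opt = pred_mse(a0, a0, obs, t0)  # optimal MSE (correct covariance)
e_mis = pred_mse(a0, a1, obs, t0)  # MSE under the misspecified covariance
print("efficiency:", e_opt / e_mis)  # in (0, 1]; tends to 1 as n grows

# Entropy distance (symmetrized Kullback divergence) between the two
# zero-mean Gaussian measures for z(obs); log-determinant terms cancel:
K0 = cov(a0, obs[:, None], obs[None, :])
K1 = cov(a1, obs[:, None], obs[None, :])
J = 0.5 * (np.trace(np.linalg.solve(K1, K0))
           + np.trace(np.linalg.solve(K0, K1))) - len(obs)
print("entropy distance:", J)
```

Because the two covariances are compatible on $\lbrack 0, 1\rbrack$, the efficiency ratio approaches 1 as the observation grid is refined, which is the phenomenon whose rate of convergence the paper bounds.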

#### Article information

**Source**
Ann. Statist., Volume 18, Number 3 (1990), 1116-1138.

**Dates**
First available in Project Euclid: 12 April 2007

http://projecteuclid.org/euclid.aos/1176347742

**Digital Object Identifier**
doi:10.1214/aos/1176347742

**Mathematical Reviews number (MathSciNet)**
MR1062701

**Zentralblatt MATH identifier**
0749.62061
