March, 1954 Spacing of Information in Polynomial Regression
A. de la Garza
Ann. Math. Statist. 25(1): 123-130 (March, 1954). DOI: 10.1214/aoms/1177728851


The purpose of this paper is to investigate a problem in the spacing of information in certain applications of polynomial regression. It is shown that for a polynomial of degree $m,$ the variance-covariance matrix of the estimated polynomial coefficients given by a spacing of information at more than $m + 1$ values of the sure variate can always be attained by spacing the same information at only $m + 1$ values of the sure variate, these $m + 1$ values being bounded by the extreme values of the first spacing. The results presented are of use in experimental design involving polynomial regression when a choice of sure variate values is possible but restricted to a specified range. Let the polynomial under consideration be \begin{equation*}\tag{1.1} P(x) = \alpha_1 + \alpha_2x + \cdots + \alpha_{m+1}x^m, \quad m \geqq 1,\end{equation*} and let $P(x_\epsilon) = y(x_\epsilon) + \delta_\epsilon, \epsilon = 1(1)N, N \geqq (m + 1).$ The $y(x_\epsilon)$ are observed uncorrelated variates with random errors $\delta_\epsilon$ having mean zero and finite variance $\sigma^2_\epsilon > 0.$ The $x_\epsilon$ are observed without error, there being at least $(m + 1)$ distinct $x_\epsilon.$ The following notation is introduced. Let $\vec{x} = (1, x, x^2, \cdots, x^m),$ $X = (\vec{x}_\epsilon), \epsilon = 1(1)N,$ and let $W$ be the $N \times N$ diagonal matrix with entry $w_\epsilon = 1/\sigma^2_\epsilon$ in the $(\epsilon, \epsilon)$ position. The quantity $w_\epsilon$ will henceforth be referred to as the "information" of $y(x_\epsilon),$ and $Q = \sum w_\epsilon, \epsilon = 1(1)N,$ will be referred to as the "total information." The matrix $X'WX$ will be called the "information matrix."
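With this notation the information matrix can be assembled directly. The following sketch (illustrative values only; NumPy is assumed, and the locations and weights are hypothetical) builds $X$, $W$, and $X'WX$ for a quadratic, $m = 2$:

```python
import numpy as np

def information_matrix(x, w, m):
    """Information matrix X'WX for polynomial regression of degree m.

    x : sure-variate locations (observed without error)
    w : information weights, w_e = 1 / sigma_e^2
    m : degree of the polynomial
    """
    x = np.asarray(x, dtype=float)
    X = np.vander(x, m + 1, increasing=True)   # row e is (1, x_e, x_e^2, ..., x_e^m)
    W = np.diag(np.asarray(w, dtype=float))    # diagonal matrix of informations
    return X.T @ W @ X

# Hypothetical spacing: five locations, unit information at each, so Q = 5.
M = information_matrix([0.0, 0.5, 1.0, 1.5, 2.0], [1, 1, 1, 1, 1], m=2)
```

Note that the $(1, 1)$ entry of $X'WX$ is the total information $Q$, and the remaining entries are the weighted power sums $\sum w_\epsilon x_\epsilon^k$.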
The problem is to show that given a spacing of total information $Q$ at locations $x_\epsilon, \epsilon = 1(1)N, N \geqq (m + 1),$ there being at least $(m + 1)$ distinct $x_\epsilon,$ it is always possible to re-space $Q$ at $(m + 1)$ distinct locations $r_j, j = 1(1)(m + 1),$ in such a manner that $\min x_\epsilon \leqq r_j \leqq \max x_\epsilon, \epsilon = 1(1)N, j = 1(1)(m + 1),$ and $X'WX = R'UR,$ with $R'UR$ being the information matrix of the re-spacing. The problem is solved by prescribing a method for finding the required $U$ and $R,$ which determine the spacing of the total information. The motivation for the problem is as follows. In experimentation in the chemical engineering industry, we most often have control over our sure variates. The sure variate $x$ could be the pressure level of our process equipment, and we would be permitted to choose any operating pressure $x$ in the pressure range $\min x$ to $\max x$ tolerated by our equipment. Quite often, and in particular with isotopic measurements, laboratory analytical determinations are required for our $y$-variates, the laboratory being the major source of error. With each laboratory determination having variance $\sigma^2,$ we can request $n_x$ determinations on the material sample taken at sure variate $x.$ Using the average of the laboratory determinations, the corresponding $y$-variate has variance $\sigma^2/n_x.$ Specifying $Q$ then amounts to specifying the total laboratory effort expended on the experiment. It might be set by such usual factors as the dollar allowance on the experiment; if the material is highly radioactive, it might be set by such unusual factors as the exposure time allowed the laboratory analysts. Furthermore, in experimentation with fairly large equipment, it is important to minimize the number of distinct levels of operation, that is, the number of distinct sure $x$'s.
The time required to make the change and to reach sufficient equilibrium representing steady-state operation of the process is often long. In any case we lose time, and with production-line equipment, we also lose production. These are the reasons for minimizing the number of distinct sure $x$'s in the experiment. The equivalence $X'WX = R'UR$ gives the required minimization. If the functional relationship between $y$ and $x$ is adequately represented by a polynomial of degree $m,$ the equivalence assures that only $(m + 1)$ distinct sure $x$'s are required to maintain the same efficiency of statistical evaluation of the experimental results, since most statistical evaluation will require $(X'WX)^{-1},$ which can now be replaced by $(R'UR)^{-1}.$ It may be seen that such experiments, common in the physico-chemical industry, present a formulation and require a mathematical model not found in ordinary regression theory, where usually it is not possible to assign various values to the corresponding $y$ variances. With the indicated background in mind, the results of this paper find application in experimental design. The determination of a spacing which optimizes some criterion involving the information matrix is made simpler. A familiar example arising in point estimation is minimizing $\vec{p}(X'WX)^{-1}\vec{p}\,'$ for a specified row vector $\vec{p}.$ An example from interpolation is minimizing the maximum of $\vec{\xi}(X'WX)^{-1}\vec{\xi}\,'$ with $\vec{\xi} = (1, \xi, \xi^2, \cdots, \xi^m)$ and $\min x_\epsilon \leqq \xi \leqq \max x_\epsilon;$ the extrapolation problem is similar. The advantage of applying the above result to such problems is that the spacing of information is at once reduced to $(m + 1)$ distinct locations, any larger number being unnecessary. The matrix $X$ is then the matrix of a Vandermonde determinant, and the properties of these matrices are well known and attractive.
These uses will be illustrated by an example given in Section 4.
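For the straight-line case $m = 1,$ the re-spacing can be exhibited by simple moment matching. The sketch below (illustrative numbers only; this is a hand-worked special case, not the paper's general method for finding $U$ and $R$) collapses a three-point spacing onto two points $r_1, r_2$ with informations $u_1, u_2$ so that $X'WX = R'UR$:

```python
import numpy as np

# Original spacing: unit information at x = 0, 1, 2; for m = 1, two points suffice.
x = np.array([0.0, 1.0, 2.0])
w = np.array([1.0, 1.0, 1.0])
Q, S1, S2 = w.sum(), (w * x).sum(), (w * x**2).sum()  # total information and moments

# Match the entries of the 2x2 information matrix with two points. Fixing
# r1 = min(x), the remaining unknowns are determined by
#   u1 + u2 = Q,   u1*r1 + u2*r2 = S1,   u1*r1^2 + u2*r2^2 = S2.
r1 = x.min()
r2 = (S2 - S1 * r1) / (S1 - Q * r1)
u2 = (S1 - Q * r1) / (r2 - r1)
u1 = Q - u2

def info(locs, infos):
    """2x2 information matrix X'WX for a straight line (m = 1)."""
    X = np.vander(np.asarray(locs, float), 2, increasing=True)
    return X.T @ np.diag(np.asarray(infos, float)) @ X

# The two information matrices coincide, and r1, r2 lie within [min x, max x].
same = np.allclose(info(x, w), info([r1, r2], [u1, u2]))
```

Here the re-spacing puts information $u_1 = 6/5$ at $r_1 = 0$ and $u_2 = 9/5$ at $r_2 = 5/3,$ both inside the original range $[0, 2],$ in agreement with the bound $\min x_\epsilon \leqq r_j \leqq \max x_\epsilon.$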


zbMATH: 0055.13206
MathSciNet: MR60777

Rights: Copyright © 1954 Institute of Mathematical Statistics

