The Annals of Mathematical Statistics

Optimal Strategies in Factorial Experiments

Abstract

The present study extends the authors' previous work [3], [5], in which two randomization procedures in fractional factorial experiments were investigated. The general problem is to choose, in an optimal manner, a fractional replication of a full factorial system and an estimator, for the purpose of making inferences concerning a subset of pre-assigned parameters. We consider factorial systems of order $2^m$, which consist of $m$ factors each at 2 levels. The results can be generalized to factorial systems of order $p^m$, where $p > 2$. The factorial model adopted for the present study is the same as that of the previous work, and the reader is referred to the preceding papers [3], [5] for details and properties. The first two sections of [5] are essential for the present paper. The statistical properties of the conditional least-squares estimators (c.l.s.e.) were studied in [5] with respect to two specified randomization procedures (R.P.I. and R.P.II.), which are particular types of random allocation designs. It was shown that the c.l.s.e.'s constitute a complete class of linear unbiased estimators. A c.l.s.e. can be characterized as a least-squares estimator adjusted for the block of treatment combinations chosen and for the information available on the nuisance parameters. In the present study we extend the investigation to the comparison of different randomization procedures. We consider a general class of procedures, characterized as follows: by some confounding method we construct $M = 2^{m-s}$ blocks, each containing $S = 2^s$ treatment combinations. We choose one of the blocks according to an arbitrary probability vector, $\xi$, and observe the associated random variables. R.P.I. is the special case in which every block has the same probability of being chosen. A fixed fractional replication procedure is the special case in which one of the blocks is chosen with probability one.
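The block construction and block-selection step described above can be sketched concretely. The following Python fragment is our own illustration, not the authors' code: it partitions the $2^m$ treatment combinations into $M = 2^{m-s}$ blocks using $m - s$ independent defining contrasts (each contrast represented, for this sketch, as a set of factor indices evaluated mod 2), then draws one block according to a probability vector $\xi$. A uniform $\xi$ corresponds to R.P.I.; a degenerate $\xi$ corresponds to a fixed fractional replication.

```python
import itertools
import random

def blocks_by_confounding(m, contrasts):
    """Partition the 2^m treatment combinations into blocks.

    Each of the m - s defining contrasts is a collection of factor
    indices; two combinations fall in the same block when every
    contrast takes the same value (mod 2) on both.  With m - s
    independent contrasts this yields M = 2^(m-s) blocks, each
    containing S = 2^s treatment combinations.
    """
    blocks = {}
    for t in itertools.product((0, 1), repeat=m):
        key = tuple(sum(t[i] for i in c) % 2 for c in contrasts)
        blocks.setdefault(key, []).append(t)
    return list(blocks.values())

def choose_block(blocks, xi, rng=random):
    """Randomization procedure: select one block according to the
    probability vector xi (uniform xi = R.P.I.; a point mass on one
    block = a fixed fractional replication)."""
    u, acc = rng.random(), 0.0
    for block, p in zip(blocks, xi):
        acc += p
        if u < acc:
            return block
    return blocks[-1]
```

For example, with $m = 3$ and $s = 1$, two independent contrasts produce four blocks of two treatment combinations each, and `choose_block` with $\xi = (1, 0, 0, 0)$ always returns the first block.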
Each randomization procedure is represented (uniquely) by a probability vector, $\xi$, and each c.l.s.e. is represented by a vector $\gamma$ in $(2^m - 2^s)$-dimensional Euclidean space. A strategy of the Statistician is thus represented by a pair of vectors $(\xi, \gamma)$. Minimax strategies are studied in the present paper for two states of information concerning the nuisance parameters: (i) all the nuisance parameters are bounded; (ii) all the nuisance parameters are bounded and all their signs are known. As proven in the present paper, the minimax strategy corresponding to Case (i) consists of R.P.I. with an unadjusted c.l.s.e. and is thus independent of the actual bounds of the nuisance parameters. On the other hand, the minimax strategy for Case (ii) consists of some fixed fractional replication with an adjusted c.l.s.e., which depends on the actual bounds of the nuisance parameters. These minimax theorems are proven with respect to a mean-square-error loss function, defined as the trace of the mean-square-error matrix. A closeness loss function is also considered. The closeness of an estimator is defined as the probability that the values of the estimator will lie in a prescribed neighborhood of the true values of the parameters. In Section 2 the mean-square-error matrix and the closeness of a c.l.s.e., under an arbitrary randomization procedure $\xi$, are derived. The mean-square-error and closeness loss functions are defined in Section 3. Formulae for mean-square-error Bayes strategies, against any given a priori distribution of the nuisance parameters, are then derived. The closeness risk function is approximated by a similar function which has the same Bayes strategies as the mean-square-error loss function. It is shown that as the size of the experiment, $S = 2^s$, grows, the closeness risk function and the approximating function converge.
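The two loss functions admit a simple numerical sketch. The fragment below is a schematic illustration under assumed inputs, not the paper's formulae: the mean-square-error risk of a strategy is taken as the $\xi$-weighted average of the trace of the MSE matrix attained in each block, and the closeness of an estimator is estimated by a Monte Carlo proportion of estimates falling in a prescribed neighborhood of the true value.

```python
import numpy as np

def mse_risk(xi, block_mse_traces):
    """Mean-square-error risk of a strategy: the xi-weighted average
    of the trace of the MSE matrix attained in each block.  The
    per-block traces are assumed given for this illustration."""
    return float(np.dot(xi, block_mse_traces))

def closeness(estimates, theta, radius):
    """Monte Carlo estimate of closeness: the proportion of estimates
    lying within a prescribed neighborhood (here, a ball of the given
    radius) of the true parameter value theta."""
    estimates = np.asarray(estimates)
    return float(np.mean(np.abs(estimates - theta) <= radius))
```

For instance, under a uniform $\xi$ over four blocks with assumed per-block traces $(0.5, 1.5, 0.75, 0.75)$, the risk is their plain average, $0.875$; and for normally distributed estimates the closeness with radius $1.96$ standard deviations is approximately $0.95$.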
The minimax theorems are stated and proved in Section 4 only with respect to the mean-square-error loss function. It can be shown that the minimax closeness strategies, for the states of information studied, are the same as those for the mean-square-error loss function. This can also be concluded from the results of Section 3. In Section 5 we present a numerical example to illustrate the results and the computations involved.
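The qualitative contrast between Cases (i) and (ii) can be illustrated by a brute-force minimax search in a toy two-block model of our own construction (not the model of the paper): observing block $b$ yields an estimate of $\theta$ with aliasing bias $s_b\beta$ for a single bounded nuisance parameter $\beta$, with $s = (+1, -1)$, and the adjusted estimator shifts the estimate by $\gamma_b$. When $\beta$ ranges over a symmetric interval, no adjustment can beat $\gamma = 0$; when the sign of $\beta$ is known, an adjusted $\gamma$ attains a strictly smaller minimax risk, mirroring the role of the adjustment in the theorems described above. (With a single nuisance parameter the choice of $\xi$ ties in this toy; the distinction between R.P.I. and fixed replication emerges only in the full model.)

```python
import itertools
import numpy as np

def max_risk(xi, gamma, betas, var=1.0):
    """Worst-case risk over the nuisance parameter beta.  Given block b,
    the estimator has variance var and bias s_b*beta - gamma_b, so its
    MSE is var + (s_b*beta - gamma_b)**2; the risk is the xi-weighted
    average over blocks, maximized over the grid of beta values."""
    signs = (1.0, -1.0)
    return max(
        sum(p * (var + (s * beta - g) ** 2)
            for p, s, g in zip(xi, signs, gamma))
        for beta in betas
    )

def minimax(betas, xi_grid, gamma_grid, var=1.0):
    """Brute-force minimax: search a grid of strategies (xi, gamma) for
    the one minimizing the worst-case risk.  Returns (risk, xi, gamma)."""
    best = None
    for x1 in xi_grid:
        xi = (x1, 1.0 - x1)
        for gamma in itertools.product(gamma_grid, repeat=2):
            r = max_risk(xi, gamma, betas, var)
            if best is None or r < best[0]:
                best = (r, xi, gamma)
    return best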

Article information

Source
Ann. Math. Statist., Volume 34, Number 3 (1963), 780-791.

Dates
First available in Project Euclid: 27 April 2007

https://projecteuclid.org/euclid.aoms/1177704003

Digital Object Identifier
doi:10.1214/aoms/1177704003

Mathematical Reviews number (MathSciNet)
MR154377

Zentralblatt MATH identifier
0118.34102
