Open Access
June, 1970 On the Inference and Decision Models of Statistics
Colin R. Blyth
Ann. Math. Statist. 41(3): 1034-1058 (June, 1970). DOI: 10.1214/aoms/1177696980


The inference and decision models considered here are those described by Neyman in [5], pages 16 and 17. For a random variable $X$ with possible probability distributions indexed by a parameter $\theta$, Neyman distinguishes two inference approaches to the problem of estimating $\theta$: (i) having observed $X = x$, find the most probable values $z(x)$ of $\theta$ (this requires a priori probabilities, which must often be chosen rather arbitrarily and whose existence may be questioned); and (ii) having observed $X = x$, find the values $z(x)$ that are most reasonable, or in which we have the greatest confidence (this requires the rather arbitrary choice of a real-valued function $L$, with $L(x, \theta)$ measuring the degree of confidence we have in the parameter value $\theta$ given that $X = x$ has been observed). Of these two approaches, (i) is a special case of (ii), since in particular $L(x, \theta)$ can be taken to be the a posteriori probability of $\theta$ given $X = x$ for a specified a priori distribution of the parameter. Neyman remarks that in his opinion "the inferential theory solves no problem" and proceeds to describe a real-world situation for which the decision model is a very good one. For a random variable $X$ with possible probability distributions indexed by a parameter $\theta$, the decision approach to estimating $\theta$ is to associate, with the use of each possible estimator $z(X)$, a random loss $W(z, X, \theta)$ whose possible distributions are indexed by $\theta$, and to determine $z$ so that this loss will, in some specified sense, be as small as possible on the whole over all $\theta$ values. The purpose of the present paper is to examine Neyman's inference model in detail, to describe real situations for which it appears to be a good model, and to compare these with situations for which the decision model is more appropriate.
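To make the decision model concrete, here is a minimal sketch (an illustration of the general idea, not code or an example from the paper): take $X \sim \text{Binomial}(n, \theta)$, let the loss be squared error $W = (z(X) - \theta)^2$, and compute the exact risk $E_\theta[W]$ of two hypothetical estimators across $\theta$ values.

```python
from math import comb

n = 10  # number of Bernoulli trials (illustrative choice)

def risk(z, theta):
    """Exact risk E_theta[(z(X) - theta)^2] for X ~ Binomial(n, theta)."""
    return sum(comb(n, x) * theta**x * (1 - theta)**(n - x) * (z(x) - theta)**2
               for x in range(n + 1))

mle = lambda x: x / n                 # sample proportion
shrunk = lambda x: (x + 1) / (n + 2)  # pulls the estimate toward 1/2

for theta in (0.1, 0.5, 0.9):
    print(theta, round(risk(mle, theta), 4), round(risk(shrunk, theta), 4))
```

Neither estimator has smaller risk for every $\theta$ (the shrinkage estimator wins near $\theta = 1/2$, the sample proportion wins near the endpoints), which is exactly why "as small as possible on the whole over all $\theta$ values" must be made precise by some further specified criterion, such as minimax or Bayes risk.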
Mathematical models are described, but there is almost no mathematics in the sense of deriving details for the models: concern is mostly with the applied mathematics question of what mathematical model to use for a real situation. Section 2 is a detailed description of Neyman's general inference model. Here $L(x, \theta)$ is interpreted as a measure of agreement between $P_\theta$ probabilities and observed proportions; terms such as "most probable," "most reasonable," "greatest confidence" are avoided as having connotations that are difficult to support. The general inference estimator is just Wolfowitz's minimum distance estimator [12]. Wolfowitz evaluates such procedures purely from the decision viewpoint. Section 3 is a detailed description of the general decision model. This is given in a form closely paralleling the description of Section 2, in order that the two models can be compared easily. In Section 4, the inference and decision models are compared from several viewpoints. The inference problem seeks an estimator $z$ such that the $P_{z(x)}$ probabilities will be (for all $x$) close to the proportions observed in $x$; the decision problem of estimating $\theta$ seeks an estimator $z$ such that $z(X)$ will be (for all $\theta$) close to $\theta$. The decision problem requires the idea of distance or error, measured by loss, in the parameter space; this idea is completely or partially absent in the inference model, where $\theta$ merely indexes the possible probability models. It is this presence or absence of a loss function that distinguishes between the decision and inference models as defined here: in the decision problem the use of an estimator $z$ results in definite losses and we are to determine a $z$ for which they are small; in the inference model the idea of definite losses does not appear. The decision model is a much more specific model for a much more specific real problem. 
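A concrete sketch of a minimum distance estimator in Wolfowitz's sense may help (this is an illustration under assumed details, not code from the paper): the estimate is the $\theta$ whose model cdf comes closest, in Kolmogorov-Smirnov sup-distance, to the empirical cdf of the sample, so that the $P_{z(x)}$ probabilities agree as well as possible with the observed proportions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.exponential(scale=2.0, size=200))  # sample; true scale is 2.0

def ks_distance(theta):
    """Sup-distance between the empirical cdf of x and the Exp(theta) cdf."""
    model = 1.0 - np.exp(-x / theta)
    n = len(x)
    upper = np.max(np.abs(model - np.arange(1, n + 1) / n))  # ecdf just after each point
    lower = np.max(np.abs(model - np.arange(0, n) / n))      # ecdf just before each point
    return max(upper, lower)

grid = np.linspace(0.5, 5.0, 451)  # crude grid search over the scale parameter
theta_hat = grid[np.argmin([ks_distance(t) for t in grid])]
```

Replacing the sup-distance by another measure of agreement between model probabilities and observed proportions gives a different special case of the same general scheme.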
In both problems we want to choose a probability model for the real situation: in the decision problem we know what the model is to be used for; in the inference problem we do not. It can be reasonably argued, following Neyman [5] page 17, that the inference model is sometimes used in the mistaken belief that making $P_{z(x)}$ close to the observed proportions will make $P_{z(X)}$ close to $P_\theta$, and that if so it should be replaced by the decision model with loss $W\lbrack P_{z(X)}, P_\theta\rbrack$. In the inference problem as described here, we do not have this definite aim of making $P_{z(X)}$ close to $P_\theta$: we are uncertain as to whether we want $P_{z(X)}$ close to $P_\theta$ or want $z(X)$ close to $\theta$ or want $\lbrack z(X)\rbrack^2$ close to $\theta^2$ or want something else; vaguely, we have all decision-type aims, but we have no definite one. Both the inference and decision models require the making of somewhat analogous and rather arbitrary choices at two levels. (Here and throughout this paper, "arbitrary" is used in its primary dictionary meaning of "depending on will or discretion; discretionary; can be freely chosen," with none of the secondary dictionary meanings of "unreasoned, despotic." Possible synonyms such as "subjective," "individualistic," "personalistic" are avoided because of their technical meanings.) We follow Neyman's view that the user is fully entitled to any choices he cares to make and that no attempt should be made to impose particular choices on all. In Section 5 the most commonly used inference methods (Likelihood, Least Squares and Moments, Chi-square, Kolmogorov-Smirnov) are examined as special cases of the general inference method.
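The sense in which such methods are special cases of one scheme can be sketched as follows (my illustration, using a standard multinomial example not taken from the paper): each method picks a different measure of agreement $L(x, \theta)$ between the $P_\theta$ cell probabilities and the observed cell proportions, and the estimate optimizes that agreement. Below, maximum likelihood and minimum chi-square are applied to the classical genetic-linkage counts with cell probabilities $((2+\theta)/4, (1-\theta)/4, (1-\theta)/4, \theta/4)$; the two agreement measures give nearly, but not exactly, the same estimate.

```python
import numpy as np

counts = np.array([125, 18, 20, 34])  # classical genetic-linkage cell counts
n = counts.sum()

def cell_probs(theta):
    return np.array([(2 + theta) / 4, (1 - theta) / 4,
                     (1 - theta) / 4, theta / 4])

def neg_loglik(theta):
    """Agreement measured by the multinomial likelihood (to be minimized)."""
    return -np.sum(counts * np.log(cell_probs(theta)))

def chi_square(theta):
    """Agreement measured by Pearson's chi-square (to be minimized)."""
    expected = n * cell_probs(theta)
    return np.sum((counts - expected) ** 2 / expected)

grid = np.linspace(0.01, 0.99, 9801)  # crude grid search
mle = grid[np.argmin([neg_loglik(t) for t in grid])]
mcs = grid[np.argmin([chi_square(t) for t in grid])]
```

Changing the agreement measure changes the estimator, which is the "rather arbitrary choice" the inference model requires.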




Published: June, 1970
First available in Project Euclid: 27 April 2007

zbMATH: 0198.23201
MathSciNet: MR266348
Digital Object Identifier: 10.1214/aoms/1177696980

Rights: Copyright © 1970 Institute of Mathematical Statistics
