Abstract
For the experiment $\mathscr{E}$, given by $\mathscr{E} = \lbrack x$ (in space $X$) distributed with probability element $f(x \mid \theta) d\mu(x), \{\theta\} = \Theta$ the parameter space, $W(d, \theta)$ a non-negative loss function for decision $d \in D\rbrack$, the definition of a (non-randomized) strict Bayes decision function (BDF) can, following Wald [7], be stated: $\delta_\pi$, given by the decision function $d_\pi(x)$, is a strict BDF with respect to the proper prior probability measure $\pi$ on $\Theta$ if $\delta_\pi$ minimizes $R(\pi, \delta) = \int d\pi(\theta)r(\theta, \delta) = \int d\pi(\theta) \int d\mu(x)W\lbrack d(x), \theta\rbrack f(x \mid \theta)$ [requiring, of course, that $R(\pi, \delta_\pi) < \infty\rbrack$. (As is well known, randomized decision functions can be excluded from standard Bayesian methods. We will not attempt here to justify their exclusion in the deviations below from the standard Bayesian formulation.) Relaxing the restriction that the prior measure be proper, we have: DEFINITION 1.1. $\delta_m$ is a normal generalized BDF (NGBDF) with respect to the generalized prior $m$ on $\Theta$ if $\delta_m$ minimizes $R(m, \delta)$ [requiring, of course, that $R(m, \delta_m) < \infty\rbrack$. DEFINITION 1.2. $\delta_m$ is an extensive generalized BDF (EGBDF) with respect to $m$ if $\delta_m = \{d_m(x)\}$ where $d_m(x)$ minimizes $\int dm(\theta)W(d, \theta)f(x \mid \theta)$ a.e. $\mu$. (The use of the epithets "extensive" and "normal" is consistent with their use in Raiffa and Schlaifer [5]. Definition 1.2 differs from the definition of Sacks [6] in not requiring the finiteness of $\int dm(\theta)f(x \mid \theta)$.) Theorem 2.1 shows that if $\delta_m$ is an NGBDF then it is also an EGBDF. For some cases in which $\min_\delta R(m, \delta) = \infty$, the following generalization of EGBDF is useful: DEFINITION 1.3.
$\delta_m^\ast$ is a comparative generalized BDF (CGBDF) with respect to $m$ if the quantity $\Delta_m(\delta, \delta_m^\ast)$ defined by $\Delta_m(\delta, \delta^\ast_m) = \int dm(\theta)\lbrack r(\theta, \delta) - r(\theta, \delta^\ast_m)\rbrack$ is non-negative for all $\delta$. Admissibility considerations are unaffected by multiplying $W(d, \theta)$ by an arbitrary positive function of $\theta$. So, since the above definitions involve $m$ and $W$ only in the composite element $W(d, \theta) dm(\theta)$, it is clear that any general sufficient condition for admissibility of $\delta_m$ or $\delta^\ast_m$ for $m$ proper may also be stated for $m$ general (that is, possibly improper). (The same point is made by Stein [4], p. 232.) Theorem 3.1 gives such a sufficient condition, suggested by the Lehmann-Blyth technique for proving admissibility [1], while Corollary 3.1 is an extension of the well-known admissibility of strict BDF's under certain conditions. The above ideas are applied in Section 4 to the estimation, with quadratic loss, of the mean of the one-dimensional exponential family. The very close links with Karlin's technique [3] are immediately apparent. The application clarifies Karlin's remark (p. 411 of [3]) that his results may be regarded as a refinement of the Lehmann-Blyth technique and also, finally, lends some support to his conjecture (p. 415 of [3]) that a certain condition for admissibility of the contracted estimator $\gamma x, 0 < \gamma \leqq 1$, is necessary as well as sufficient. An associated reference is Cheng Ping [2].
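As a hypothetical numerical sketch (not part of the paper): under quadratic loss $W(d, \theta) = (d - \theta)^2$, the pointwise minimization defining the EGBDF in Definition 1.2 is solved by the generalized posterior mean $d_m(x) = \int \theta f(x \mid \theta)\, dm(\theta) / \int f(x \mid \theta)\, dm(\theta)$. The code below approximates this for a $N(\theta, 1)$ likelihood with $m$ taken as Lebesgue measure on a wide grid (an improper flat prior), for which the EGBDF is $d_m(x) = x$, i.e. the uncontracted case $\gamma = 1$ of the estimator $\gamma x$ discussed in Section 4. The function names and grid choice are illustrative assumptions only.

```python
import numpy as np

def egbdf_quadratic(x, theta_grid, likelihood):
    """Generalized posterior mean on a uniform grid (approximate EGBDF
    under quadratic loss); the grid spacing cancels in the ratio."""
    f = likelihood(x, theta_grid)
    return np.sum(theta_grid * f) / np.sum(f)

def normal_density(x, theta):
    # N(theta, 1) density f(x | theta)
    return np.exp(-0.5 * (x - theta) ** 2) / np.sqrt(2.0 * np.pi)

# Wide uniform grid standing in for the improper flat prior m on Theta.
theta_grid = np.linspace(-50.0, 50.0, 20001)
print(egbdf_quadratic(1.7, theta_grid, normal_density))
```

With the flat prior the generalized posterior mean reproduces the observation itself, so the printed value is (up to discretization error) 1.7.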
Citation
M. Stone. "Generalized Bayes Decision Functions, Admissibility and the Exponential Family." Ann. Math. Statist. 38 (3), 818-822, June 1967. https://doi.org/10.1214/aoms/1177698876