Open Access
August, 1971
Identifying Probability Limits
Gordon Simons
Ann. Math. Statist. 42(4): 1429-1433 (August, 1971). DOI: 10.1214/aoms/1177693254

Abstract

The literature of mathematical statistics is filled with theorems on (weakly) consistent estimators. Even though most statisticians want stronger evidence of an estimator's worth, these theorems have provided some comfort for the applied statistician. In this paper, we begin an investigation into the concept of consistency and, more specifically, investigate the extent to which a consistent sequence of estimators identifies the parameter it estimates.

It will be recalled that for any sequence of random variables which converges in probability to a limit, there is a subsequence which converges almost surely to that limit. This would seem to suggest that if one is given a consistent sequence of estimators $\hat{\phi}_1, \hat{\phi}_2, \cdots$ converging to $\phi(\theta)$ $(\theta \in \Theta)$, say, then one can find a subsequence which converges almost surely. That is, whenever there exists a weakly consistent sequence of estimators, there exists a strongly consistent sequence as well. Unfortunately, the specific subsequence may depend upon the unknown parameter value $\theta$.
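To recall the standard argument behind the subsequence fact (sketched here only for orientation): if $X_n$ converges to $X$ in probability, one may select indices $n_1 < n_2 < \cdots$ such that \begin{equation*}P\{|X_{n_k} - X| > 2^{-k}\} \leqq 2^{-k}, \qquad k = 1, 2, \cdots,\end{equation*} so that $\sum_k P\{|X_{n_k} - X| > 2^{-k}\} < \infty$ and the Borel-Cantelli lemma gives $X_{n_k} \rightarrow X$ almost surely. The indices $n_k$ are chosen through the distribution of the sequence; for a family of estimators, that distribution, and hence the selected subsequence, may vary with $\theta$.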
Still, an applied statistician might be able to choose sequentially which observed estimators to include in a (random) subsequence. This is easily seen to be equivalent to postulating the existence of functions $g_n(x_1, \cdots, x_n)$ $(n \geqq 1)$ such that $g_n(\hat{\phi}_1, \cdots, \hat{\phi}_n)$ converges to $\phi(\theta)$ with probability one $(\theta \in \Theta)$. In Section 2, we will show that such functions do not always exist.

It seems appropriate, therefore, to question whether the values of the entire sequence $\hat{\phi}_1, \hat{\phi}_2, \cdots$ (always) allow one to determine the value of $\phi(\theta)$ with probability one. We prefer the following mathematical reformulation of this question: Let $(\Omega, \mathscr{A}, P)$ be a probability space and let $\mathscr{E}$ denote the set of infinite-dimensional random vectors $\mathscr{X} = (X_1, X_2, \cdots)$ defined on this space whose coordinates converge in probability to a random variable, which we shall denote by $p\mathscr{X}$. The question becomes: Does there always exist a function $f$ which maps $R^\infty$ (infinite-dimensional Euclidean space) into $R$ (the reals) such that, for every $\mathscr{X} \in \mathscr{E}$, the set \begin{equation*}\tag{1} \lbrack f(\mathscr{X}) \neq p\mathscr{X} \rbrack \text{ is contained in a null set of } \mathscr{A}?\end{equation*} We shall refer to any function $f$ which satisfies (1) for all $\mathscr{X} \in \mathscr{F} \subset \mathscr{E}$ as a probability limit identification function (PLIF) on $\mathscr{F}$.

We partially justify the reformulation as follows: In Section 3, we will show that the vector of estimators $(\hat{\phi}_1, \hat{\phi}_2, \cdots)$ can be interpreted as defined on the same probability space for every $\theta \in \Theta$. As such, consistent estimators are equivalent to a family of vectors $\mathscr{F} = \{\mathscr{X}_\theta, \theta \in \Theta\} \subset \mathscr{E}$. Identifying $\phi(\theta)$ becomes equivalent to showing that there exists a PLIF on $\mathscr{F}$. In Section 4, we show that there exists a PLIF on $\mathscr{E}$ if there exists a PLIF on $\mathscr{E}^\ast$, the set of $\mathscr{X} \in \mathscr{E}$ whose coordinates are Bernoulli variables and whose probability limit $p\mathscr{X}$ is almost surely a constant (necessarily zero or one). With values of $\theta$ corresponding to vectors $\mathscr{X} \in \mathscr{E}^\ast$ and $\phi(\theta)$ corresponding to $p\mathscr{X}$, the reformulation becomes complete.

We do not know whether a PLIF always exists on $\mathscr{E}^\ast$, except for certain elementary probability spaces. It is hoped that the current paper will stimulate further research into this question. If such functions do not always exist, this will cast further doubt on the importance of consistency.

Breiman, LeCam and Schwartz [2] have discussed an interesting problem whose formulation is closely related to the current one. They assume a family of probability measures $\{P_\theta(\cdot), \theta \in \Theta\}$, each defined on the same measurable space $(\Omega, \mathscr{A})$ (with points $\omega \in \Omega$), and that $\phi(\theta)$ is measurable with respect to a $\sigma$-field defined on $\Theta$; they find necessary and sufficient conditions for the existence of an $\mathscr{A}$-measurable estimator $\hat{\phi}(\omega)$ for which $P_\theta\{\hat{\phi}(\omega) = \phi(\theta)\} = 1$ for all $\theta \in \Theta$ (and also for a closely related condition). The question of the existence of a measurable PLIF on $\mathscr{E}^\ast$ translates into the question of the existence of a certain "zero-one set" in their context. Skibinski [3] has connected their work on zero-one sets with some work of Bahadur [1].
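For orientation regarding the set $\mathscr{E}^\ast$, the following elementary example (standard, and not part of the paper's development) indicates why the coordinates of an element of $\mathscr{E}^\ast$ need not converge pointwise: on a probability space rich enough to carry such a sequence, let $X_1, X_2, \cdots$ be independent Bernoulli variables with \begin{equation*}P\{X_n = 1\} = n^{-1}, \qquad n = 1, 2, \cdots.\end{equation*} Then $X_n \rightarrow 0$ in probability, so that $\mathscr{X} = (X_1, X_2, \cdots) \in \mathscr{E}^\ast$ with $p\mathscr{X} = 0$; yet, since $\sum_n n^{-1} = \infty$, the second Borel-Cantelli lemma gives $X_n = 1$ infinitely often with probability one, and $\lim_n X_n$ fails to exist almost surely. A PLIF on $\mathscr{E}^\ast$, if one exists, must therefore recover the constant zero or one from a sequence whose pointwise limit need not exist.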

Citation

Gordon Simons. "Identifying Probability Limits." Ann. Math. Statist. 42 (4) 1429 - 1433, August, 1971. https://doi.org/10.1214/aoms/1177693254

Information

Published: August, 1971
First available in Project Euclid: 27 April 2007

zbMATH: 0223.62040
MathSciNet: MR345171
Digital Object Identifier: 10.1214/aoms/1177693254

Rights: Copyright © 1971 Institute of Mathematical Statistics
