Open Access
Nonparametric Estimation of the Transition Distribution Function of a Markov Process
George G. Roussas
Ann. Math. Statist. 40(4): 1386-1400 (August, 1969). DOI: 10.1214/aoms/1177697510

Abstract

In [8] the problem of nonparametric estimation in Markov processes has been considered, and estimates of the initial, two-dimensional joint, and transition densities of the process, satisfying a number of optimal properties, have been obtained. In the present paper, under the same nonparametric setup, the attention is centered primarily on the transition distribution function of the process. It will be assumed that the underlying Markov process, defined on a probability space $(\Omega, \mathscr{A}, P)$ and taking values in the real line $R$, is (strictly) stationary, and has initial, two-dimensional joint, and transition densities $p(\cdot), q(\cdot, \cdot)$, and $t(\cdot \mid x), x \varepsilon R$, respectively, relative to the appropriate Lebesgue measures. Let $K$ be a probability density. On the basis of the first $n + 1$ random variables $X_j, j = 1, 2, \cdots, n + 1$, of the process, we define the random variables $p_n(x), x \varepsilon R$, and $q_n(y), y \varepsilon R \times R$ (by suppressing the random element $\omega$) by the following relations
\begin{equation*} \tag{1.1} p_n(x) = (nh)^{-1} \sum^n_{j=1} K((x - X_j)h^{-1}),\end{equation*}
\begin{equation*} \tag{1.2} q_n(y) = q_n(x, x') = (nh)^{-1} \sum^n_{j=1} K((x - X_j)h^{-\frac{1}{2}})K((x' - X_{j+1})h^{-\frac{1}{2}}),\end{equation*}
where $h = h(n)$ is a sequence of positive constants satisfying some additional conditions. We further set
\begin{equation*} \tag{1.3} t_n(x' \mid x) = q_n(x, x')/p_n(x).\end{equation*}
Next, by means of $p_n(x)$ and $q_n(x, x')$, define the random variables $F_n(x) = \int^x_{-\infty} p_n(z)\, dz$ and $G_n(z \mid x) = \int^z_{-\infty} t_n(x' \mid x)\, dx'$. We finally let $F(\cdot)$ and $G(\cdot \mid x), x \varepsilon R$, be the initial and transition distribution functions of the process.
Under suitable conditions on the function $K$, the sequence $\{h\}$, and the process, the main results of this paper are the following. The distribution function $F_n(x)$, as an estimate of $F(x)$, obeys the Glivenko-Cantelli theorem; this is Theorem 3.1. Turning to the estimate $G_n(\cdot \mid x)$ of $G(\cdot \mid x)$, we establish in Theorem 3.2 that $\sup \{|G_n(z \mid x) - G(z \mid x)|; z \varepsilon R\}$ converges to zero, as $n \rightarrow \infty$, though only in the probability sense. This is true for all $x \varepsilon R$.
In Section 4 it is assumed that the $r$th absolute moment of $X_1$ exists (for some $r = 1, 2, \cdots$), and the problem is that of gaining further information about $G(\cdot \mid x)$ by estimating its $k$th moment, denoted by $m(k; x)$, for $k = 1, 2, \cdots, r$. Letting the rather simple expression $m_n(k; x) = (nh)^{-1}p_n^{-1}(x) \sum^n_{j=1} X^k_{j+1}K((x - X_j)h^{-1})$ stand for an estimate of $m(k; x)$, it is shown that, as $n \rightarrow \infty$, $m_n(k; x) \rightarrow m(k; x)$ in probability, for $k = 1, 2, \cdots, r$ and $x \varepsilon R$. This is the content of Theorem 4.1.
Finally, in Section 5 we look into the problem of estimating the quantiles of $G(\cdot \mid x)$, and in connection with this, two results are derived. For some $p$ in the interval $(0, 1)$, it is assumed that the $p$th quantile, $\xi(p, x)$, of $G(\cdot \mid x)$ is unique.
By defining $\xi_n(p, x)$ as the smallest root of the equation $G_n(z \mid x) = p$ and using it as an estimate of $\xi(p, x)$, it is proved that, as $n \rightarrow \infty$, $\xi_n(p, x) \rightarrow \xi(p, x)$ in probability (Theorem 5.1), and $(nh)^{\frac{1}{2}} \lbrack \xi_n(p, x) - \xi(p, x) \rbrack \rightarrow N(0, \tau^2(\xi, x))$ in law. This is Theorem 5.2, and the variance $\tau^2(\xi, x)$ is explicitly given in that theorem.
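
To make the definitions above concrete, the following is a minimal numerical sketch (not taken from the paper) of the estimators $p_n$, $q_n$, $F_n$, $G_n(\cdot \mid x)$, $m_n(k; x)$, and $\xi_n(p, x)$. The Gaussian choice of the kernel $K$, the bandwidth $h(n) = n^{-1/3}$, the quantile search grid, and the simulated Gaussian AR(1) data are all illustrative assumptions; the paper only requires $K$ and $\{h\}$ to satisfy the conditions stated there.

# Illustrative sketch of the estimators (1.1)-(1.3) and of F_n, G_n, m_n, xi_n.
# The Gaussian kernel, the bandwidth choice, and the simulated AR(1) data are
# assumptions made here for the example, not choices prescribed by the paper.
import numpy as np
from scipy.stats import norm


def K(u):
    # Gaussian probability density playing the role of the kernel K.
    return norm.pdf(u)


def p_n(x, X, h):
    # (1.1): p_n(x) = (nh)^{-1} sum_j K((x - X_j)/h), j = 1, ..., n.
    n = len(X) - 1
    return np.sum(K((x - X[:n]) / h)) / (n * h)


def q_n(x, xp, X, h):
    # (1.2): q_n(x, x') = (nh)^{-1} sum_j K((x - X_j)/h^{1/2}) K((x' - X_{j+1})/h^{1/2}).
    n = len(X) - 1
    s = np.sqrt(h)
    return np.sum(K((x - X[:n]) / s) * K((xp - X[1:n + 1]) / s)) / (n * h)


def F_n(x, X, h):
    # F_n(x) = int_{-inf}^x p_n(z) dz; closed form via the normal c.d.f. for Gaussian K.
    n = len(X) - 1
    return np.mean(norm.cdf((x - X[:n]) / h))


def G_n(z, x, X, h):
    # G_n(z | x) = int_{-inf}^z q_n(x, x')/p_n(x) dx'; the inner integral of the
    # Gaussian kernel again reduces to the normal c.d.f.
    n = len(X) - 1
    s = np.sqrt(h)
    num = np.sum(K((x - X[:n]) / s) * norm.cdf((z - X[1:n + 1]) / s)) * s / (n * h)
    return num / p_n(x, X, h)


def m_n(k, x, X, h):
    # Section 4: m_n(k; x) = (nh)^{-1} p_n(x)^{-1} sum_j X_{j+1}^k K((x - X_j)/h),
    # i.e. a weighted average of X_{j+1}^k with weights K((x - X_j)/h).
    n = len(X) - 1
    w = K((x - X[:n]) / h)
    return np.sum(w * X[1:n + 1] ** k) / np.sum(w)


def xi_n(p, x, X, h, grid):
    # Section 5: smallest grid point z with G_n(z | x) >= p; the grid is assumed
    # to cover the relevant range of X_{j+1}.
    vals = np.array([G_n(z, x, X, h) for z in grid])  # nondecreasing in z
    return grid[np.searchsorted(vals, p)]


if __name__ == "__main__":
    # Illustrative stationary Markov data: a Gaussian AR(1) chain (an assumption).
    rng = np.random.default_rng(0)
    n = 2000
    X = np.empty(n + 1)
    X[0] = rng.normal()
    for j in range(1, n + 1):
        X[j] = 0.5 * X[j - 1] + rng.normal()
    h = n ** (-1 / 3)  # one bandwidth choice with h -> 0 and nh -> infinity

    x = 0.0
    print("F_n(0)       ", F_n(x, X, h))
    print("G_n(0 | 0)   ", G_n(0.0, x, X, h))   # true value 0.5 for this chain
    print("m_n(1; 0)    ", m_n(1, x, X, h))     # true conditional mean is 0
    print("xi_n(0.5, 0) ", xi_n(0.5, x, X, h, np.linspace(-4.0, 4.0, 801)))

With the Gaussian choice of $K$, the integrals defining $F_n$ and $G_n(\cdot \mid x)$ reduce to the normal distribution function, which is what the closed forms in the sketch use; for a general kernel one would integrate numerically instead.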

Citation


George G. Roussas. "Nonparametric Estimation of the Transition Distribution Function of a Markov Process." Ann. Math. Statist. 40 (4) 1386 - 1400, August, 1969. https://doi.org/10.1214/aoms/1177697510

Information

Published: August, 1969
First available in Project Euclid: 27 April 2007

zbMATH: 0188.50501
MathSciNet: MR246471
Digital Object Identifier: 10.1214/aoms/1177697510

Rights: Copyright © 1969 Institute of Mathematical Statistics
