Open Access
Estimating the Parameters of a Differential Process
Herman Rubin, Howard G. Tucker
Ann. Math. Statist. 30(3): 641-658 (September, 1959). DOI: 10.1214/aoms/1177706195

Abstract

Let $X$ denote a differential process, i.e., a stochastic process with independent increments for which the distribution of $X(t + h) - X(t)$ depends only on $h$. The parameter $t$ runs through the interval $\lbrack 0, + \infty)$, and the usual initial condition $P\lbrack X(0) = 0\rbrack = 1$ is assumed. Then it is known that the distribution of $X(t)$ is infinitely divisible, i.e., the logarithm of its characteristic function can be written as \begin{equation*}\tag{1.1} \log f_{X(t)}(u) = i\gamma tu + t \int^{+\infty}_{-\infty} \bigg(e^{iux} - 1 - \frac{iux}{1 + x^2}\bigg) \frac{1 + x^2}{x^2} dG(x).\end{equation*} In this canonical representation, $\gamma$ is a real constant, and $G$ is a bounded, nondecreasing function, it being permissible always to consider $G(- \infty) = 0$. In the usual probabilistic terminology, the probability law of $X(t)$ is a convolution of a normal law and a (possibly infinite) number of Poisson laws, or a limit of such laws. The function $G$ is called the jump function; its saltus at $x = 0$, $\sigma^2 = G(+0) - G(-0)$, is the variance of the normal component, and its set of points of increase for $x \neq 0$ gives information as to the nature of the Poisson components, viz., the "relative density" of the magnitudes of the discontinuities of the sample function.

The purpose of this paper is to derive estimates for this jump function $G$ and for the "trend term" $\gamma$. Two estimates of $G$ are obtained, and one estimate is obtained for $\gamma$.

In the first method of estimating $G$, considered in $\S 2$, it is assumed that the experimenter can observe a sample function of $X$ at any finite number of values of $t$ that he chooses. Accordingly, for any integer $n$, let $$X_{nk} = X \big(\frac{k}{n}\big) - X \big(\frac{k - 1}{n}\big).$$ Then the estimate $G^{\ast}_{N,n}(u)$ of $G(u)$ is defined as \begin{equation*}\tag{1.2}G^{\ast}_{N,n}(u) = \frac{1}{N} \sum^{nN}_{k = 1} \frac{X^2_{nk}}{1 + X^2_{nk}} I_{\lbrack X_{nk} \leqq u\rbrack},\end{equation*} where \begin{equation*}I_{\lbrack X_{nk} \leqq u\rbrack} = \begin{cases} 1 & \text{if } X_{nk} \leqq u \\ 0 & \text{if } X_{nk} > u,\end{cases}\end{equation*} and $N = \lbrack Tn\rbrack$, $T$ being the largest value of $t$ observed. It is proved that this estimate is strongly consistent in the following sense: \begin{equation*}\tag{1.3}P\{\lim_{n \rightarrow \infty} \lim_{N \rightarrow \infty} G^{\ast}_{N,n}(u) = G(u) \text{ for all } u \in C(G)\} = 1,\end{equation*} where $C(G)$ denotes the set of all values of $u$ at which $G$ is continuous. This estimate is not necessarily an unbiased estimate of $G(u)$ for all $u \in C(G)$.
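The first estimate lends itself to a direct numerical check. The following is a minimal simulation sketch, not code from the paper: it generates the increments $X_{nk}$ of an illustrative differential process (drift plus Brownian motion plus unit jumps at Poisson rate $\lambda$, so that $G$ has saltus $\sigma^2$ at $0$ and mass $\lambda/2$ at $x = 1$) and evaluates $G^{\ast}_{N,n}(u)$ of (1.2). All parameter values and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def increments(gamma, sigma, lam, n, N):
    """Simulate the nN increments X_nk = X(k/n) - X((k-1)/n) over [0, N]
    for X(t) = gamma*t + sigma*W(t) + (unit jumps at Poisson rate lam)."""
    h = 1.0 / n
    m = n * N
    dx = gamma * h + sigma * np.sqrt(h) * rng.standard_normal(m)
    dx += rng.poisson(lam * h, size=m)  # number of unit jumps per mesh cell
    return dx

def G_star(x_nk, u, N):
    """The estimate (1.2): (1/N) * sum of X_nk^2/(1 + X_nk^2) over X_nk <= u."""
    w = x_nk**2 / (1.0 + x_nk**2)
    return w[x_nk <= u].sum() / N

# gamma = 0, sigma^2 = 1, unit jumps at rate lam = 0.5, observed on [0, 400]
# with mesh 1/200.  Limiting values: G(-1) = 0, G(0.5) = 1, G(2) = 1.25.
x_nk = increments(gamma=0.0, sigma=1.0, lam=0.5, n=200, N=400)
for u in (-1.0, 0.5, 2.0):
    print(u, G_star(x_nk, u, N=400))
```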
The second method for estimating $G$, developed in $\S 3$, requires that the experimenter be able to observe all the discontinuities of a sample function of $X$ on a finite interval, in addition to being able to observe $X(t)$ at any finite set of values of $t$. Not only is a consistent estimate obtained for $G$, but a consistent estimate is also obtained for the variance of the normal component, $\sigma^2 = G(+0) - G(-0)$.

Let $\{k_n\}$ denote a sequence of positive integers such that \begin{equation*}\tag{1.4}\sum_n k_n^{-\frac{1}{3}} < \infty.\end{equation*} Further, let $T_0$ be a fixed value of time $T$, let \begin{equation*}\tag{1.5}S^2_n = \sum^{k_n}_{k = 1} \{X(kT_0/k_n) - X((k - 1)T_0/k_n)\}^2,\end{equation*} and let \begin{equation*}\tag{1.6}D^2 = \text{the sum of the squares of the jumps of the sample function during the time interval } \lbrack 0, T_0\rbrack.\end{equation*} Finally, let \begin{equation*}\tag{1.7}J_T(x) = \int^x_{-\infty} \frac{b^2}{1 + b^2} dN_T(b),\end{equation*} where, for every Borel set $B$, $N_T(B)$ denotes the number of discontinuities observed for $X$ during $\lbrack 0, T\rbrack$ whose magnitudes lie in $B$. The estimate $\hat G_{n, T}(u)$ of $G(u)$ is then constructed as follows: \begin{equation*}\tag{1.8}\hat G_{n, T}(u) = \begin{cases}T^{-1} J_T(u) & \text{if } u < 0 \\ T^{-1} J_T(u) + T_0^{-1} (S^2_n - D^2) & \text{if } u > 0.\end{cases}\end{equation*} The estimate $\hat G_{n, T}(u)$ is an unbiased estimate of $G(u)$ if $\sigma^2 = 0$ or if $u < 0$, but in any case $\hat G_{n, T}(b) - \hat G_{n, T}(a)$ is an unbiased estimate of $G(b) - G(a)$, provided $0 \not\in \lbrack a, b\rbrack$. Also, this estimate is consistent in the following sense: \begin{equation*}\tag{1.9}\lim_{\substack{T \rightarrow \infty\\n \rightarrow \infty}} \hat G_{n, T}(u) = G(u)\end{equation*} with probability one for every $u$. In addition, $(1/T_0) \lbrack S^2_n - D^2\rbrack$ is a consistent estimate of $\sigma^2 = G(+0) - G(-0)$, the variance of the normal component, in the sense that $\lim_{n \rightarrow \infty} (1/T_0) \lbrack S^2_n - D^2\rbrack = \sigma^2$ with probability 1.

In $\S 4$ a comparison is made of the two estimates of $G(x)$ obtained in $\S\S 2$ and 3. It is found that the two estimates agree in a special limiting case. In particular it is proved that \begin{equation*}\tag{1.10}p \lim_{n \rightarrow \infty} G^{\ast}_{N, n}(u) = \lim_{n \rightarrow \infty} \hat G_{n, N}(u)\quad \text{for all } u \in C(\hat G_{n, N}).\end{equation*}

Finally, in $\S 5$ a consistent and unbiased estimate is derived for the "trend term" $\gamma$. This estimate is \begin{equation*}\tag{1.11}\hat \gamma(t) = \frac{1}{t} \big\{X(t) - \int^{+\infty}_{-\infty} \frac{b^3}{1 + b^2} dN_t(b)\big\}.\end{equation*} It is consistent in the sense that \begin{equation*}\tag{1.12}P\{\lim_{t\rightarrow\infty} \hat \gamma(t) = \gamma\} = 1.\end{equation*} Another way of writing this estimate is as follows. Let $J_1, J_2, \cdots, J_n, \cdots$ denote the discontinuities of the sample curve up to time $t$ (not necessarily in order); then $$\hat \gamma(t) = \frac{1}{t}\big\{X(t) - \sum_n \frac{J^3_n}{1 + J^2_n}\big\}.$$
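A companion sketch under the same illustrative model (drift, Brownian part, unit jumps at Poisson rate $\lambda$) exercises the second estimate (1.8), the variance estimate $(S^2_n - D^2)/T_0$, and the trend estimate (1.11). It presumes, as the second method does, that the jump times and magnitudes on $\lbrack 0, T\rbrack$ are observable; in a simulation they are simply retained. One point worth noting: the $\gamma$ of (1.1) is the canonical trend, which for unit jumps at rate $\lambda$ equals the raw drift plus $\lambda/2$. All names and values below are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative ground truth: raw drift, normal variance sigma^2, unit jumps
# at Poisson rate lam.  The canonical trend of (1.1) is then
# gamma = drift + lam * 1/(1 + 1^2) = drift + lam/2.
drift, sigma, lam = 0.3, 1.0, 0.5
T, T0, k_n = 2000.0, 100.0, 100_000

# Observed discontinuities on [0, T]: times and magnitudes (all +1 here).
jumps = np.ones(rng.poisson(lam * T))
jump_times = rng.uniform(0.0, T, jumps.size)

def J_T(x):
    """(1.7): sum of b^2/(1 + b^2) over observed jumps of magnitude b <= x."""
    sel = jumps[jumps <= x]
    return (sel**2 / (1.0 + sel**2)).sum()

# (1.5): S_n^2, squared increments of X over [0, T0] at mesh T0/k_n; each
# mesh increment is Gaussian plus whatever jumps fall in its cell.
h = T0 / k_n
dx = drift * h + sigma * np.sqrt(h) * rng.standard_normal(k_n)
early = jump_times < T0
cells = np.minimum((jump_times[early] / h).astype(int), k_n - 1)
np.add.at(dx, cells, jumps[early])
S2 = (dx**2).sum()

# (1.6): D^2, the sum of squared jumps on [0, T0].
D2 = (jumps[early]**2).sum()

def G_hat(u):
    """(1.8): the normal-variance correction enters only for u > 0."""
    return J_T(u) / T if u < 0 else J_T(u) / T + (S2 - D2) / T0

print("sigma^2 estimate:", (S2 - D2) / T0)  # near sigma^2 = 1
print("G_hat(2.0):", G_hat(2.0))            # near sigma^2 + lam/2 = 1.25

# (1.11): gamma_hat(t) = (1/t){X(t) - sum J^3/(1 + J^2)}.  X(T) is assembled
# from its independent parts: drift, Brownian value at T, and the jumps.
X_T = drift * T + sigma * np.sqrt(T) * rng.standard_normal() + jumps.sum()
gamma_hat = (X_T - (jumps**3 / (1.0 + jumps**2)).sum()) / T
print("gamma estimate:", gamma_hat)         # near gamma = drift + lam/2 = 0.55
```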

Citation


Herman Rubin, Howard G. Tucker. "Estimating the Parameters of a Differential Process." Ann. Math. Statist. 30 (3): 641-658, September, 1959. https://doi.org/10.1214/aoms/1177706195

Information

Published: September, 1959
First available in Project Euclid: 27 April 2007

zbMATH: 0092.36604
MathSciNet: MR110174
Digital Object Identifier: 10.1214/aoms/1177706195

Rights: Copyright © 1959 Institute of Mathematical Statistics
