The Annals of Mathematical Statistics

Optimal Stopping in a Markov Process

Howard M. Taylor


Abstract

Let $X = (X_t, \mathbf{F}_t, P^x)_{t\geqq0}$ be a Markov process where $(X_t; t \geqq 0)$ is the trajectory or sample path, $\mathbf{F}_t$ is the $\sigma$-algebra of events generated by $(X_s; 0 \leqq s \leqq t)$, and $P^x$ is the probability distribution on sample paths corresponding to an initial state $x$. The state space is taken as the semi-compact $(E, \mathbf{C})$ where $E$ is a locally compact separable metric space with family of open sets $\mathbf{C}$. A non-negative extended real valued random variable $T$ such that for each $t \geqq 0, \{T \leqq t\} \in \mathbf{F}_t$ is called a Markov time or stopping time. This paper studies the problem of choosing a stopping time $T$ which, for a fixed $\lambda \geqq 0$, maximizes one of the following criteria:\begin{equation*}\tag{1} \Theta_T(x) = E^x e^{-\lambda T} g(X_T);\end{equation*}\begin{equation*}\tag{2} \Lambda_T(x) = E^x\lbrack e^{-\lambda T} g(X_T) - \int^T_0 e^{-\lambda s} c(X_s)\, ds\rbrack, \text{ where } E^xT < \infty; \text{ or}\end{equation*}\begin{equation*}\tag{3} \Phi_T(x) = E^x\lbrack g(X_T) - \int^T_0 c(X_s)\, ds\rbrack/E^xT, \text{ where } 0 < E^xT < \infty;\end{equation*} where $g$ and $c$ are non-negative continuous functions defined on the state space of the process. Dynkin [9] studied criterion (1) with $\lambda = 0$ under the general assumption that $X$ is a standard process with a possibly random lifetime and under very weak continuity assumptions concerning the return function $g$. He showed that criterion (2) can often be transformed into criterion (1), and thus his approach is applicable in this case as well. This paper studies optimal stopping in a Markov process having a Feller transition function, a special case of Dynkin's development.
We further specialize to exponentially distributed lifetimes, which causes the appearance of a discount factor $e^{-\lambda t}$, with the natural interpretation that a dollar transaction $t$ time units hence has a present value of $e^{-\lambda t}$. Criterion (3) often has the meaning of a long-run time average return, and a means of transforming this criterion into criterion (2) is given. Finally, some techniques for implementing Dynkin's approach in a variety of commonly occurring situations are given, along with examples of their use.
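For intuition, criterion (1) on a finite-state, discrete-time chain reduces to the dynamic-programming fixed point $V(x) = \max\{g(x), \beta \sum_y P(x,y) V(y)\}$ with $\beta = e^{-\lambda}$, and an optimal rule stops as soon as $g(X_t) = V(X_t)$. The following is a minimal sketch of that fixed-point computation, not the paper's method; the three-state chain, return function $g$, and discount rate $\lambda$ below are illustrative assumptions.

```python
import math

def optimal_stopping_value(P, g, lam, tol=1e-10):
    """Value iteration for criterion (1) on a finite discrete-time chain.

    Solves V(x) = max(g(x), beta * sum_y P[x][y] * V(y)), beta = exp(-lam),
    and returns V together with the stopping set {x : V(x) = g(x)}.
    """
    beta = math.exp(-lam)
    n = len(g)
    V = list(g)  # V >= g always holds, so g is a valid starting point
    while True:
        V_new = [max(g[x], beta * sum(P[x][y] * V[y] for y in range(n)))
                 for x in range(n)]
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            break
        V = V_new
    stop = [x for x in range(n) if V[x] <= g[x] + 1e-8]
    return V, stop

# Illustrative chain: state 1 moves to an absorbing end (0 or 2) with equal odds.
P = [[1.0, 0.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.0, 1.0]]
g = [0.0, 0.5, 2.0]
lam = 0.1
V, stop = optimal_stopping_value(P, g, lam)
# At state 1, continuing is worth exp(-0.1) * 0.5 * (0 + 2) > 0.5 = g(1),
# so the optimal rule continues there and stops at the endpoints.
```

Running this gives $V(1) = e^{-0.1} \approx 0.905 > g(1)$, so state 1 lies outside the stopping set while both absorbing states lie inside it.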

Article information

Source
Ann. Math. Statist., Volume 39, Number 4 (1968), 1333-1344.

Dates
First available in Project Euclid: 27 April 2007

Permanent link to this document
https://projecteuclid.org/euclid.aoms/1177698259

Digital Object Identifier
doi:10.1214/aoms/1177698259

Mathematical Reviews number (MathSciNet)
MR232444

Zentralblatt MATH identifier
0177.45702

JSTOR
links.jstor.org

Citation

Taylor, Howard M. Optimal Stopping in a Markov Process. Ann. Math. Statist. 39 (1968), no. 4, 1333--1344. doi:10.1214/aoms/1177698259. https://projecteuclid.org/euclid.aoms/1177698259
