The Annals of Probability

The Controller-and-Stopper Game for a Linear Diffusion

Ioannis Karatzas and William D. Sudderth



Consider a process $X(\cdot) = \{X(t),\ 0 \leq t < \infty\}$ with values in the interval $I = (0, 1)$, absorption at the boundary points of $I$, and dynamics

$$dX(t) = \beta(t)dt + \sigma(t)dW(t),\quad X(0) = x.$$
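These dynamics can be simulated directly. The sketch below is a minimal Euler-Maruyama discretization of the controlled diffusion with absorption at the endpoints of $I = (0,1)$; the constant choice of $(\beta, \sigma)$ and the function name are illustrative assumptions, not part of the paper.

```python
import random


def simulate_path(x0, beta, sigma, dt=1e-3, t_max=10.0, rng=None):
    """Euler-Maruyama simulation of dX = beta dt + sigma dW on I = (0, 1),
    absorbed at the boundary points of I.  Here (beta, sigma) is held
    constant, a hypothetical admissible choice; in the game the controller
    may let it depend on the current position X(t)."""
    rng = rng or random.Random(0)
    x, t = x0, 0.0
    while t < t_max:
        if x <= 0.0 or x >= 1.0:
            # absorbed: clamp the Euler overshoot back to the boundary
            return max(0.0, min(1.0, x)), t
        x += beta * dt + sigma * rng.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    return x, t
```

For $\beta \equiv 0$, $\sigma \equiv 1$ the process is a Brownian motion, whose expected absorption time from $x$ is $x(1 - x)$, so paths started at $x_0 = 0.5$ are typically absorbed well before `t_max`.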

The values $(\beta(t), \sigma(t))$ are selected by a controller from a subset of $\Re \times (0, \infty)$ that depends on the current position $X(t)$, for every $t \geq 0$. At any stopping rule $\tau$ of his choice, a second player, called a stopper, can halt the evolution of the process $X(\cdot)$, upon which he receives from the controller the amount $e^{-\alpha\tau}u(X(\tau))$; here $\alpha \in [0, \infty)$ is a discount factor, and $u: [0, 1] \to \Re$ is a continuous “reward function.” Under appropriate conditions on this function and on the controller’s set of choices, it is shown that the two players have a saddle point of “optimal strategies.” These can be described fairly explicitly by reduction to a suitable problem of optimal stopping, whose maximal expected reward $V$ coincides with the value of the game,

$$V = \sup_{\tau} \inf_{X(\cdot)} \mathbf{E}[e^{-\alpha\tau}u(X(\tau))] = \inf_{X(\cdot)} \sup_{\tau} \mathbf{E}[e^{-\alpha\tau}u(X(\tau))].$$
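The inner expectation in this formula can be approximated by Monte Carlo for any fixed strategy pair. The sketch below is a purely illustrative example, not the paper's construction: the controller holds $(\beta, \sigma)$ constant, the stopper uses the hypothetical rule $\tau = $ first exit time from a subinterval $(a, b) \subset I$, and the reward $u(x) = x(1 - x)$, the thresholds, and all helper names are assumptions.

```python
import math
import random


def sample_payoff(x0, beta, sigma, u, alpha,
                  a=0.2, b=0.8, dt=1e-3, t_max=20.0, rng=None):
    """One Monte Carlo sample of e^{-alpha*tau} u(X(tau)) under a fixed,
    illustrative strategy pair: constant (beta, sigma) for the controller,
    and tau = first exit time of X from (a, b) for the stopper."""
    rng = rng or random.Random()
    x, t = x0, 0.0
    while a < x < b and t < t_max:
        # Euler-Maruyama step for dX = beta dt + sigma dW
        x += beta * dt + sigma * rng.gauss(0.0, 1.0) * math.sqrt(dt)
        t += dt
    return math.exp(-alpha * t) * u(max(0.0, min(1.0, x)))


def estimate_value(n=2000, seed=1):
    """Average n samples to approximate E[e^{-alpha*tau} u(X(tau))]."""
    rng = random.Random(seed)
    u = lambda x: x * (1.0 - x)   # a continuous reward function on [0, 1]
    total = sum(sample_payoff(0.5, 0.0, 1.0, u, alpha=0.05, rng=rng)
                for _ in range(n))
    return total / n
```

The saddle point asserted in the paper means that optimizing the stopper's rule $\tau$ against the worst-case controller (or vice versa) yields the same number $V$; the estimate above merely evaluates the payoff for one fixed pair.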

Article information

Ann. Probab., Volume 29, Number 3 (2001), 1111-1127.

First available in Project Euclid: 5 March 2002


Primary: 93E20 (optimal stochastic control), 60G40 (stopping times; optimal stopping problems; gambling theory)
Secondary: 62L15 (optimal stopping), 60D60

Stochastic game; optimal stopping; one-dimensional diffusions; generalized Itô rule; local time; excessive functions


Karatzas, Ioannis; Sudderth, William D. The Controller-and-Stopper Game for a Linear Diffusion. Ann. Probab. 29 (2001), no. 3, 1111-1127. doi:10.1214/aop/1015345598.

