The Annals of Probability

Martingale approach to stochastic differential games of control and stopping

Ioannis Karatzas and Ingrid-Mona Zamfirescu


Abstract

We develop a martingale approach for studying continuous-time stochastic differential games of control and stopping, in a non-Markovian framework and with the control affecting only the drift term of the state process. Under appropriate conditions, we show that the game has a value and construct a saddle pair of optimal control and stopping strategies. Crucial in this construction is a characterization of saddle pairs in terms of pathwise and martingale properties of suitable quantities.
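The notions of "value" and "saddle pair" used in the abstract can be sketched in a generic controller-and-stopper formulation (illustrative notation only; the paper's precise setup, admissibility conditions and sign conventions may differ):

```latex
% Expected reward for an admissible control u and stopping time \tau,
% with running reward h and terminal reward g (generic notation):
J(u,\tau) \;=\; \mathbb{E}\!\left[\int_0^{\tau} h(t, X_t, u_t)\,dt \;+\; g(X_\tau)\right].

% The game has a value when the upper and lower values coincide:
V \;=\; \inf_{\tau}\,\sup_{u}\, J(u,\tau) \;=\; \sup_{u}\,\inf_{\tau}\, J(u,\tau).

% A pair (u^*, \tau^*) is a saddle pair if, for all admissible u and \tau,
J(u, \tau^*) \;\le\; J(u^*, \tau^*) \;\le\; J(u^*, \tau),
% in which case V = J(u^*, \tau^*).
```

Here the controller chooses $u$ to maximize and the stopper chooses $\tau$ to minimize; existence of such a pair immediately implies that the value exists.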

Article information

Source
Ann. Probab., Volume 36, Number 4 (2008), 1495–1527.

Dates
First available in Project Euclid: 29 July 2008

Permanent link to this document
https://projecteuclid.org/euclid.aop/1217360977

Digital Object Identifier
doi:10.1214/07-AOP367

Mathematical Reviews number (MathSciNet)
MR2435857

Zentralblatt MATH identifier
1142.93040

Subjects
Primary:
93E20: Optimal stochastic control
60G40: Stopping times; optimal stopping problems; gambling theory [See also 62L15, 91A60]
91A15: Stochastic games

Secondary:
91A25: Dynamic games
60G44: Martingales with continuous parameter

Keywords
Stochastic games; control; optimal stopping; martingales; Doob–Meyer decompositions; stochastic maximum principle; thrifty control strategies

Citation

Karatzas, Ioannis; Zamfirescu, Ingrid-Mona. Martingale approach to stochastic differential games of control and stopping. Ann. Probab. 36 (2008), no. 4, 1495–1527. doi:10.1214/07-AOP367. https://projecteuclid.org/euclid.aop/1217360977


References

  • Bayraktar, E. and Young, V. R. (2007). Minimizing probability of ruin and a game of stopping and control. Preprint, Dept. Mathematics, Univ. Michigan.
  • Beneš, V. E. (1970). Existence of optimal policies based on specified information, for a class of stochastic decision problems. SIAM J. Control Optim. 8 179–188.
  • Beneš, V. E. (1971). Existence of optimal stochastic control laws. SIAM J. Control Optim. 9 446–472.
  • Beneš, V. E. (1992). Some combined control and stopping problems. Paper presented at the CRM Workshop on Stochastic Systems, Montréal, November 1992.
  • Bensoussan, A. and Lions, J. L. (1982). Applications of Variational Inequalities in Stochastic Control. North-Holland, Amsterdam.
  • Bismut, J. M. (1973). Conjugate convex functions in optimal stochastic control. J. Math. Anal. Appl. 44 384–404.
  • Bismut, J. M. (1978). An introductory approach to duality in optimal stochastic control. SIAM Rev. 20 62–78.
  • Ceci, C. and Bassan, B. (2004). Mixed optimal stopping and stochastic control problems with semicontinuous final reward, for diffusion processes. Stochastics Stochastics Rep. 76 323–337.
  • Cvitanić, J. and Karatzas, I. (1996). Backwards stochastic differential equations with reflection and Dynkin games. Ann. Probab. 24 2024–2056.
  • Davis, M. H. A. (1973). On the existence of optimal policies in stochastic control. SIAM J. Control Optim. 11 587–594.
  • Davis, M. H. A. (1979). Martingale methods in stochastic control. Lecture Notes in Control and Inform. Sci. 16 85–117. Springer, Berlin.
  • Davis, M. H. A. and Varaiya, P. P. (1973). Dynamic programming conditions for partially observable stochastic systems. SIAM J. Control Optim. 11 226–261.
  • Davis, M. H. A. and Zervos, M. (1994). A problem of singular stochastic control with discretionary stopping. Ann. Appl. Probab. 4 226–240.
  • Dubins, L. E. and Savage, L. J. (1976). Inequalities for Stochastic Processes (How to Gamble If You Must), corrected ed. Dover, New York.
  • Duncan, T. E. and Varaiya, P. P. (1971). On the solutions of a stochastic control system. SIAM J. Control Optim. 9 354–371.
  • El Karoui, N. (1981). Les aspects probabilistes du contrôle stochastique. Lecture Notes in Math. 876 73–238. Springer, Berlin.
  • El Karoui, N., Nguyen, D. H. and Jeanblanc-Picqué, M. (1987). Compactification methods in the control of degenerate diffusions: Existence of an optimal control. Stochastics 20 169–219.
  • Elliott, R. J. (1977). The optimal control of a stochastic system. SIAM J. Control Optim. 15 756–778.
  • Elliott, R. J. (1982). Stochastic Calculus and Applications. Springer, New York.
  • Fleming, W. H. and Soner, H. M. (2006). Controlled Markov Processes and Viscosity Solutions, 2nd ed. Springer, New York.
  • Fujisaki, M., Kallianpur, G. and Kunita, H. (1972). Stochastic differential equations for the non linear filtering problem. Osaka J. Math. 9 19–40.
  • Hamadène, S. (2006). Mixed zero-sum stochastic differential game and American game option. SIAM J. Control Optim. 45 496–518.
  • Hamadène, S. and Lepeltier, J. P. (1995). Backward equations, stochastic control, and zero-sum stochastic differential games. Stochastics Stochastics Rep. 54 221–231.
  • Hamadène, S. and Lepeltier, J. P. (2000). Reflected BSDEs and mixed game problem. Stochastic Process. Appl. 85 177–188.
  • Haussmann, U. G. (1986). A Stochastic Maximum Principle for Optimal Control of Diffusions. Longman Scientific and Technical, Harlow.
  • Haussmann, U. G. and Lepeltier, J. P. (1990). On the existence of optimal controls. SIAM J. Control Optim. 28 851–902.
  • Kamizono, K. and Morimoto, H. (2002). On a combined control and stopping time game. Stochastics Stochastics Rep. 73 99–123.
  • Karatzas, I. and Kou, S. G. (1998). Hedging American contingent claims with constrained portfolios. Finance Stoch. 2 215–258.
  • Karatzas, I. and Ocone, D. L. (2002). A leavable, bounded-velocity stochastic control problem. Stochastic Process. Appl. 99 31–51.
  • Karatzas, I., Ocone, D. L., Wang, H. and Zervos, M. (2000). Finite-fuel singular control with discretionary stopping. Stochastics Stochastics Rep. 71 1–50.
  • Karatzas, I. and Shreve, S. E. (1991). Brownian Motion and Stochastic Calculus, 2nd ed. Springer, New York.
  • Karatzas, I. and Shreve, S. E. (1998). Methods of Mathematical Finance. Springer, New York.
  • Karatzas, I. and Sudderth, W. D. (1999). Control and stopping of a diffusion process on an interval. Ann. Appl. Probab. 9 188–196.
  • Karatzas, I. and Sudderth, W. D. (2001). The controller and stopper game for a linear diffusion. Ann. Probab. 29 1111–1127.
  • Karatzas, I. and Sudderth, W. D. (2007). Stochastic games of control and stopping for a linear diffusion. In Random Walk, Sequential Analysis and Related Topics: A Festschrift in Honor of Y. S. Chow (A. Hsiung, Zh. Ying and C. H. Zhang, eds.) 100–117. World Scientific Publishers, Singapore, Hackensack and London.
  • Karatzas, I. and Wang, H. (2000). A barrier option of American type. Appl. Math. Optim. 42 259–280.
  • Karatzas, I. and Wang, H. (2001). Utility maximization with discretionary stopping. SIAM J. Control Optim. 39 306–329.
  • Karatzas, I. and Zamfirescu, M. (2006). Martingale approach to stochastic control with discretionary stopping. Appl. Math. Optim. 53 163–184.
  • Krylov, N. V. (1980). Controlled Diffusion Processes. Springer, New York.
  • Kushner, H. J. (1965). On the stochastic maximum principle: Fixed time of control. J. Math. Anal. Appl. 11 78–92.
  • Maitra, A. and Sudderth, W. D. (1996a). Discrete Gambling and Stochastic Games. Springer, New York.
  • Maitra, A. and Sudderth, W. D. (1996b). The gambler and the stopper. In Statistics, Probability and Game Theory: Papers in Honor of David Blackwell (T. S. Ferguson, L. S. Shapley and J. B. MacQueen, eds.). IMS Lecture Notes Monograph Series 30 191–208. IMS, Hayward, CA.
  • Morimoto, H. (2003). Variational inequalities for combined control and stopping. SIAM J. Control Optim. 42 686–708.
  • Neveu, J. (1975). Discrete-Parameter Martingales. North-Holland, Amsterdam.
  • Ocone, D. L. and Weerasinghe, A. P. (2006). A degenerate variance control problem with discretionary stopping. Preprint, Dept. Mathematics, Iowa State Univ.
  • Peng, S. (1990). A general stochastic maximum principle for optimal control problems. SIAM J. Control Optim. 28 966–979.
  • Peng, S. (1993). Backward stochastic differential equations and applications to optimal control. Appl. Math. Optim. 27 125–144.
  • Rishel, R. W. (1970). Necessary and sufficient dynamic programming conditions for continuous-time optimal control. SIAM J. Control Optim. 8 559–571.
  • Rogers, L. C. G. and Williams, D. (1987). Diffusions, Markov Processes and Martingales 2. Itô Calculus. Wiley, New York.
  • Weerasinghe, A. P. (2006). A controller and stopper game with degenerate variance control. Electron. Comm. Probab. 11 89–99.