The Annals of Applied Probability

Backward SDEs for optimal control of partially observed path-dependent stochastic systems: A control randomization approach

Elena Bandini, Andrea Cosso, Marco Fuhrman, and Huyên Pham



We introduce a suitable backward stochastic differential equation (BSDE) to represent the value of an optimal control problem with partial observation for a controlled stochastic equation driven by Brownian motion. Our model is general enough to include cases with latent factors in mathematical finance. By a standard reformulation based on the reference probability method, it also includes the classical model where the observation process is affected by a Brownian motion (even in the presence of correlated noise), a case where a BSDE representation of the value was not previously available. This approach based on BSDEs allows for greater generality beyond the Markovian case; in particular, our model may include path-dependence in the coefficients (with respect to both the state and the control) and does not require any nondegeneracy condition on the controlled equation.
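Schematically, a partially observed control problem of this type can be written as follows. This is only an orientation sketch with illustrative symbols; the paper's actual formulation is path-dependent and more general than the form displayed here.

```latex
% Unobserved controlled state, driven by a Brownian motion $W$
% (coefficients may depend on the whole past trajectory of $X$):
dX_t = b(t, X, \alpha_t)\,dt + \sigma(t, X, \alpha_t)\,dW_t,
% Observation process, perturbed by a (possibly correlated) Brownian motion $B$:
dY_t = h(t, X)\,dt + dB_t,
% Gain functional, maximized over controls $\alpha$ adapted to the
% observation filtration $\mathbb{F}^Y$:
J(\alpha) = \mathbb{E}\Big[\int_0^T f(t, X, \alpha_t)\,dt + g(X)\Big],
\qquad V = \sup_{\alpha}\, J(\alpha).
```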

We use a randomization method, previously adopted only in the case of full observation. It consists, in a first step, of replacing the control with an exogenous process independent of the driving noise and of formulating an auxiliary ("randomized") control problem, where optimization is performed over changes of equivalent probability measures affecting the characteristics of the exogenous process. Our first main result proves the equivalence between the original partially observed control problem and the randomized problem. In a second step, we prove that the latter can be associated by duality with a BSDE, which then characterizes the value of the original problem as well.
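For orientation, in the full-observation setting the duality step of the randomization approach (as developed in the earlier control-randomization literature, e.g. Fuhrman and Pham, Ann. Appl. Probab. 2015) leads to a BSDE with a sign constraint on the jump component. A schematic form, with illustrative notation, is:

```latex
% Constrained BSDE associated with the randomized problem:
% $I$ is the exogenous (randomizing) process and $\mu$ its jump
% measure on the control space $A$.
Y_t = g(X) + \int_t^T f(s, X, I_s)\,ds + (K_T - K_t)
      - \int_t^T Z_s\,dW_s - \int_t^T\!\!\int_A U_s(a)\,\mu(ds, da),
% with the nonpositivity constraint on the jump component
U_t(a) \le 0, \qquad a \in A,
% where $K$ is a nondecreasing process and one seeks the minimal
% solution; the value is then characterized as $V = Y_0$.
```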

Article information

Ann. Appl. Probab., Volume 28, Number 3 (2018), 1634-1678.

Received: September 2016
Revised: July 2017
First available in Project Euclid: 1 June 2018


Primary: 60H10: Stochastic ordinary differential equations [See also 34F05]
Secondary: 93E20: Optimal stochastic control

Keywords: stochastic optimal control with partial observation; backward SDEs; randomization of controls; path-dependent controlled SDEs


Bandini, Elena; Cosso, Andrea; Fuhrman, Marco; Pham, Huyên. Backward SDEs for optimal control of partially observed path-dependent stochastic systems: A control randomization approach. Ann. Appl. Probab. 28 (2018), no. 3, 1634--1678. doi:10.1214/17-AAP1340.


