The Annals of Applied Probability

A mean-field stochastic control problem with partial observations

Rainer Buckdahn, Juan Li, and Jin Ma



In this paper, we are interested in a new type of mean-field, non-Markovian stochastic control problem with partial observations. More precisely, we assume that the coefficients of the controlled dynamics depend not only on the paths of the state, but also on the conditional law of the state given the observations to date. Our problem is strongly motivated by recent studies of mean-field games and the related McKean–Vlasov stochastic control problem, but adds the aspects of path dependence and partial observation. We first investigate the well-posedness of the state–observation dynamics, combining reference-probability-measure arguments from nonlinear filtering theory with the Schauder fixed-point theorem. We then study the stochastic control problem for the partially observed system, in which the conditional law enters nonlinearly both in the coefficients of the system and in the cost functional. As a consequence, the control problem is intrinsically “time-inconsistent”. We prove that the Pontryagin stochastic maximum principle holds in this case and characterize the adjoint equations, which turn out to be a new form of mean-field-type BSDEs.
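To make the kind of dynamics described above concrete, the following is a minimal numerical sketch, not the paper's construction: a one-dimensional state whose drift depends on the conditional mean of the state given the observations, with that conditional mean approximated by a standard bootstrap particle filter. All coefficients (`a`, `b`, `sigma`, `h`), the linear observation model, and the filter itself are illustrative assumptions.

```python
import numpy as np

# Hypothetical conditional mean-field SDE (Euler-Maruyama discretization):
#   dX_t = (a*X_t + b*E[X_t | F^Y_t]) dt + sigma dW_t
#   dY_t = h*X_t dt + dB_t
# The conditional mean E[X_t | F^Y_t] is approximated by a weighted
# particle ensemble (bootstrap particle filter).

rng = np.random.default_rng(0)

a, b, sigma, h = -1.0, 0.5, 0.3, 1.0     # illustrative coefficients
T, n_steps, n_particles = 1.0, 200, 500
dt = T / n_steps

x = 1.0                                   # "true" state, known initial value
particles = np.full(n_particles, 1.0)
weights = np.full(n_particles, 1.0 / n_particles)
cond_means = []

for _ in range(n_steps):
    m_hat = float(np.sum(weights * particles))   # estimate of E[X_t | F^Y_t]
    cond_means.append(m_hat)

    # Propagate the true state, feeding the estimated conditional mean
    # into the mean-field part of the drift.
    x += (a * x + b * m_hat) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    dY = h * x * dt + np.sqrt(dt) * rng.standard_normal()

    # Propagate particles with the same conditional-mean coupling.
    particles = particles + (a * particles + b * m_hat) * dt \
        + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)

    # Reweight by the observation likelihood (Girsanov-type weight,
    # as in reference-probability-measure approaches to filtering).
    weights = weights * np.exp(h * particles * dY
                               - 0.5 * (h * particles) ** 2 * dt)
    weights /= weights.sum()

    # Resample when the effective sample size degenerates.
    if 1.0 / np.sum(weights ** 2) < n_particles / 2:
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx]
        weights = np.full(n_particles, 1.0 / n_particles)

print(f"final conditional-mean estimate: {cond_means[-1]:.3f}")
```

Note the feedback loop that makes such systems delicate: the particle estimate of the conditional law enters the drift of the very state being filtered, which is the coupling the well-posedness analysis in the paper has to control.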

Article information

Ann. Appl. Probab., Volume 27, Number 5 (2017), 3201-3245.

Received: August 2015
Revised: January 2017
First available in Project Euclid: 3 November 2017


Primary: 60H10: Stochastic ordinary differential equations [See also 34F05]; 60H30: Applications of stochastic analysis (to PDE, etc.); 93E03: Stochastic systems, general; 93E11: Filtering [See also 60G35]; 93E20: Optimal stochastic control

Keywords: Conditional mean-field SDEs; non-Markovian stochastic control system; nonlinear filtering; stochastic maximum principle; mean-field backward SDEs


Buckdahn, Rainer; Li, Juan; Ma, Jin. A mean-field stochastic control problem with partial observations. Ann. Appl. Probab. 27 (2017), no. 5, 3201--3245. doi:10.1214/17-AAP1280.


