Abstract
We develop a theory for continuous-time non-Markovian stochastic control problems which are inherently time-inconsistent. Their distinguishing feature is that the classical Bellman optimality principle no longer holds. Our formulation is cast within the framework of a controlled non-Markovian forward stochastic differential equation and a general objective functional setting. We adopt a game-theoretic approach to study such problems, meaning that we seek subgame-perfect Nash equilibria. As a first novelty of this work, we introduce and motivate a refinement of the definition of equilibrium that allows us to establish a direct and rigorous proof of an extended dynamic programming principle, in the same spirit as in the classical theory. This in turn allows us to introduce a system consisting of an infinite family of backward stochastic differential equations, analogous to the classical HJB equation. We prove that this system is fundamental, in the sense that its well-posedness is both necessary and sufficient to characterise the value function and equilibria. As a final step, we provide an existence and uniqueness result. Some examples and extensions of our results are also presented.
Funding Statement
The authors gratefully acknowledge the support of the ANR project PACMAN ANR-16-CE05-0027.
Acknowledgments
We would like to thank an area editor and two anonymous reviewers for their careful reading and insightful comments. This work started while the authors were at Columbia University, whose support is kindly acknowledged.
Citation
Camilo Hernández, Dylan Possamaï. "Me, myself and I: A general theory of non-Markovian time-inconsistent stochastic control for sophisticated agents." Ann. Appl. Probab. 33 (2), 1396-1458, April 2023. https://doi.org/10.1214/22-AAP1845