Open Access
Impulse Control of Piecewise Deterministic Markov Processes
M. A. H. Dempster, J. J. Ye
Ann. Appl. Probab. 5(2): 399-423 (May, 1995). DOI: 10.1214/aoap/1177004771

Abstract

This paper concerns the optimal impulse control of piecewise deterministic Markov processes (PDPs). The PDP optimal (full) control problem, combining dynamic control with impulse control, is transformed into an equivalent dynamic control problem. The existence of an optimal full control is established, and a generalized Bellman-Hamilton-Jacobi necessary and sufficient optimality condition for the PDP full control problem is derived in terms of the value function of the new dynamic control problem. It is shown that the value function of the original PDP optimal full control problem is Lipschitz continuous and satisfies a generalized quasivariational inequality with a boundary condition. A necessary and sufficient optimality condition is also given in terms of the value function of the original full control problem.
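For orientation, the classical quasivariational inequality of impulse control can be sketched as follows; this is the standard textbook form with illustrative notation (generator $\mathcal{A}$, running cost $f$, intervention cost $c$), not the paper's generalized statement, which additionally accounts for the dynamic control and the boundary behavior of the PDP.

```latex
% Continuation vs. intervention: the value function V satisfies
%   min{ A V + f,  M V - V } = 0,
% where M is the intervention (impulse) operator:
\min\bigl\{\, \mathcal{A}V(x) + f(x),\; \mathcal{M}V(x) - V(x) \,\bigr\} = 0,
\qquad
\mathcal{M}V(x) \;=\; \inf_{\xi \in \Xi(x)} \bigl[\, c(x,\xi) + V(\Gamma(x,\xi)) \,\bigr],
```

Here $\Xi(x)$ denotes the admissible impulses at state $x$ and $\Gamma(x,\xi)$ the post-impulse state; where $\mathcal{M}V(x) > V(x)$ it is optimal to let the process run, and where equality holds an impulse is applied.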

Citation


M. A. H. Dempster, J. J. Ye. "Impulse Control of Piecewise Deterministic Markov Processes." Ann. Appl. Probab. 5 (2) 399-423, May 1995. https://doi.org/10.1214/aoap/1177004771

Information

Published: May, 1995
First available in Project Euclid: 19 April 2007

zbMATH: 0843.93088
MathSciNet: MR1336876
Digital Object Identifier: 10.1214/aoap/1177004771

Subjects:
Primary: 60G40
Secondary: 49B60, 93E20

Keywords: Bellman-Hamilton-Jacobi equation, impulse control, piecewise deterministic Markov processes, quasivariational inequality

Rights: Copyright © 1995 Institute of Mathematical Statistics
