The Annals of Applied Probability

A quickest detection problem with an observation cost

Robert C. Dalang and Albert N. Shiryaev

Full-text: Open access

Abstract

In the classical quickest detection problem, one must detect as quickly as possible when a Brownian motion without drift “changes” into a Brownian motion with positive drift. The change occurs at an unknown “disorder” time with exponential distribution. There is a penalty for declaring too early that the change has occurred, and a cost for late detection proportional to the time between occurrence of the change and the time when the change is declared. Here, we consider the case where there is also a cost for observing the process. This stochastic control problem can be formulated using either the notion of strong solution or of weak solution of the s.d.e. that defines the observation process. We show that the value function is the same in both cases, even though no optimal strategy exists in the strong formulation. We determine the optimal strategy in the weak formulation and show, using a form of the “principle of smooth fit” and under natural hypotheses on the parameters of the problem, that the optimal strategy takes the form of a two-threshold policy: observe only when the posterior probability that the change has already occurred, given the observations, is larger than a threshold $A\geq0$, and declare that the disorder time has occurred when this posterior probability exceeds a threshold $B\geq A$. The constants $A$ and $B$ are determined explicitly from the parameters of the problem.
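To make the two-threshold policy concrete, here is a minimal simulation sketch assuming the classical setup: observation SDE $dX_t=\mu\mathbf{1}_{\{t\geq\theta\}}\,dt+\sigma\,dW_t$, posterior $\pi_t$ updated by the standard Shiryaev filtering equation while observing, and by the deterministic prior dynamics $d\pi_t=\lambda(1-\pi_t)\,dt$ while not observing. The function name, the parameter values and the placeholder thresholds A and B are illustrative only; they are not the constants derived in the paper.

```python
import numpy as np

def simulate_two_threshold_policy(lam=1.0, mu=1.0, sigma=1.0,
                                  A=0.2, B=0.8, dt=1e-3, T=20.0,
                                  rng=None):
    """Euler-scheme sketch of a two-threshold observe/declare rule.

    pi approximates P(disorder has occurred by t | information so far):
    it follows the classical filtering dynamics while observing, and the
    deterministic prior dynamics while not observing. All parameters are
    illustrative placeholders, not the paper's optimal constants.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.exponential(1.0 / lam)      # unobserved disorder time
    pi, t = 0.0, 0.0
    observation_time = 0.0                  # total time spent observing (costly)

    while t < T:
        if pi >= A:                         # observe only above threshold A
            # one Euler step of the observation SDE and of the filter
            drift = mu * dt if t >= theta else 0.0
            dX = drift + sigma * np.sqrt(dt) * rng.standard_normal()
            innovation = (dX - mu * pi * dt) / sigma
            pi += lam * (1.0 - pi) * dt + (mu / sigma) * pi * (1.0 - pi) * innovation
            observation_time += dt
        else:
            # no observations: the posterior drifts deterministically toward 1
            pi += lam * (1.0 - pi) * dt
        pi = min(max(pi, 0.0), 1.0)
        t += dt
        if pi >= B:                         # declare that the change has occurred
            break

    return {"declared_at": t, "disorder_time": theta,
            "observation_time": observation_time, "posterior": pi}

if __name__ == "__main__":
    print(simulate_two_threshold_policy())
```

Running the sketch repeatedly illustrates the trade-off the thresholds encode: raising A reduces observation cost but lets the posterior drift unobserved, while raising B delays the declaration in exchange for fewer false alarms.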

Article information

Source
Ann. Appl. Probab., Volume 25, Number 3 (2015), 1475-1512.

Dates
First available in Project Euclid: 23 March 2015

Permanent link to this document
https://projecteuclid.org/euclid.aoap/1427124134

Digital Object Identifier
doi:10.1214/14-AAP1028

Mathematical Reviews number (MathSciNet)
MR3325279

Zentralblatt MATH identifier
1326.60052

Subjects
Primary: 60G35: Signal detection and filtering [See also 62M20, 93E10, 93E11, 94Axx]
Secondary: 60G40: Stopping times; optimal stopping problems; gambling theory [See also 62L15, 91A60]
93E20: Optimal stochastic control
94A13: Detection theory

Keywords
Quickest detection; stochastic control; disorder problem; free boundary problem

Citation

Dalang, Robert C.; Shiryaev, Albert N. A quickest detection problem with an observation cost. Ann. Appl. Probab. 25 (2015), no. 3, 1475--1512. doi:10.1214/14-AAP1028. https://projecteuclid.org/euclid.aoap/1427124134


