Abstract
Let $\xi_0, \xi_1, \ldots, \xi_{\omega-1}$ be observations from the hidden Markov model with probability distribution $P_{\theta_0}$, and let $\xi_\omega, \xi_{\omega+1}, \ldots$ be observations from the hidden Markov model with probability distribution $P_{\theta_1}$. The parameters $\theta_0$ and $\theta_1$ are given, while the change point $\omega$ is unknown. The problem is to raise an alarm as soon as possible after the distribution changes from $P_{\theta_0}$ to $P_{\theta_1}$, but to avoid false alarms. Specifically, we seek a stopping rule $N$ which allows us to observe the $\xi$'s sequentially, such that $E_\infty N$ is large and, subject to this constraint, $\sup_k E_k(N - k \mid N \ge k)$ is as small as possible. Here $E_k$ denotes expectation under the change point $k$, and $E_\infty$ denotes expectation under the hypothesis of no change whatever.
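As a compact restatement (a sketch in display form; the symbol $\gamma$, denoting a lower bound on the mean time to false alarm, is introduced here for illustration and does not appear in the abstract itself), Pollak's minimax criterion asks for the stopping rule attaining

```latex
% Minimax detection-delay criterion (sketch): among stopping rules whose
% mean time to false alarm is at least \gamma, minimize the worst-case
% conditional expected detection delay.
\inf_{N \,:\, E_\infty N \,\ge\, \gamma} \;\; \sup_{k \ge 1} \; E_k\!\left( N - k \,\middle|\, N \ge k \right)
```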
In this paper we investigate the performance of the Shiryayev–Roberts–Pollak (SRP) rule for change point detection in the dynamic system of hidden Markov models. By making use of the Markov chain representation of the likelihood function, the structure of the asymptotically minimax policy and of the Bayes rule, and sequential hypothesis testing theory for Markov random walks, we show that the SRP procedure is asymptotically minimax in the sense of Pollak [Ann. Statist. 13 (1985) 206–227]. Next, we present a second-order asymptotic approximation to the expected stopping time of this stopping scheme when $\omega = 1$. Motivated by sequential analysis in hidden Markov models, a nonlinear renewal theory for Markov random walks is also given.
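To make the detection scheme concrete, the following is a minimal numerical sketch (not taken from the paper) of a Shiryayev–Roberts-type statistic for a finite-state hidden Markov model with discrete emissions, computed via HMM forward recursions. All names (xs, pi0, A0, B0, threshold, ...) are assumptions introduced for illustration; the SRP rule studied in the paper additionally randomizes the initial value of the statistic (Pollak's modification), which this sketch omits.

```python
import numpy as np

def sr_type_detector(xs, pi0, A0, B0, pi1, A1, B1, threshold):
    """Illustrative Shiryayev-Roberts-type change detector for a finite-state HMM.

    Assumed notation (not from the paper):
      xs          -- iterable of observed symbols (integers indexing emission columns)
      pi0, A0, B0 -- initial distribution, transition matrix, emission matrix under theta_0
      pi1, A1, B1 -- the same objects under theta_1
      threshold   -- alarm level; larger values give fewer false alarms

    Returns the (1-based) alarm time, or None if the threshold is never crossed.
    The alarm fires the first time R_n = sum_{k<=n} LR(change at k | xs[:n]) >= threshold.
    """
    alpha0 = None      # forward vector under the no-change model, p_{theta_0}(xs[:n], X_n = x)
    post_alphas = []   # one forward vector per candidate change point k <= n
    for n, x in enumerate(xs, start=1):
        # Forward step under theta_0 (no change).
        prev0 = pi0 if alpha0 is None else alpha0 @ A0
        new_alpha0 = prev0 * B0[:, x]

        # Forward step for every existing candidate change point (post-change dynamics).
        post_alphas = [(a @ A1) * B1[:, x] for a in post_alphas]
        # New candidate: the change occurs exactly at time n (convention: the
        # transition into time n already follows theta_1).
        post_alphas.append((pi1 if alpha0 is None else alpha0 @ A1) * B1[:, x])

        alpha0 = new_alpha0
        p0 = alpha0.sum()                          # likelihood of xs[:n] under no change
        R_n = sum(a.sum() for a in post_alphas) / p0
        if R_n >= threshold:
            return n
    return None
```

Note that this sketch keeps one forward vector per candidate change point, so the per-step cost grows linearly in n, and for long sequences the forward vectors should be renormalized to avoid numerical underflow; it is meant only to illustrate how the detection statistic is driven by HMM likelihood ratios.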
Citation
Cheng-Der Fuh. "Asymptotic operating characteristics of an optimal change point detection in hidden Markov models." Ann. Statist. 32 (5), 2305–2339, October 2004. https://doi.org/10.1214/009053604000000580