The adjusted Viterbi training for hidden Markov models
The EM procedure is a principal tool for parameter estimation in hidden Markov models (HMMs). In applications, however, EM is often replaced by Viterbi extraction, or training (VT). VT is computationally less intensive, more stable and more intuitively appealing, but VT estimation is biased and does not satisfy the following fixed-point property: hypothetically, given an infinitely large sample and initialized to the true parameters, VT will generally move away from these initial values. We propose adjusted Viterbi training (VA), a new method that restores the fixed-point property and thus alleviates the overall imprecision of the VT estimators, while preserving the computational advantages of the baseline VT algorithm. Simulations elsewhere have shown that VA appreciably improves the precision of estimation, both in the special case of mixture models and for more general HMMs. However, being entirely analytic, the VA correction relies on infinite Viterbi alignments and the associated limiting probability distributions. While these are explicit in the mixture case, the existence of such limiting measures is not obvious for more general HMMs. This paper proves that, under certain mild conditions, the required limiting distributions for general HMMs do exist.
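The mixture-model special case mentioned in the abstract admits a compact illustration of baseline VT and its bias. Below is a minimal, hypothetical sketch (not the authors' code, and without the VA correction itself): for a two-component, unit-variance Gaussian mixture, each observation is hard-assigned to the component with the larger weighted likelihood, and the weight and means are re-estimated from the resulting partition. The function name, the two-component setup and all parameter values are assumptions made for illustration only.

```python
import math
import random

random.seed(0)

# Simulate a two-component unit-variance Gaussian mixture
# (illustrative setup, not taken from the paper):
# Z ~ Bernoulli(w_true), X | Z = k ~ N(mu_true[k], 1).
w_true, mu_true = 0.5, (-2.0, 2.0)
data = [random.gauss(mu_true[1] if random.random() < w_true else mu_true[0], 1.0)
        for _ in range(5000)]

def viterbi_train(data, w, mu0, mu1, iters=20):
    """Viterbi (hard-assignment) training in the mixture case:
    classify each point to the component maximizing the weighted
    log-likelihood, then re-estimate the weight and means from the
    resulting partition.  This is the biased baseline VT procedure
    that the VA correction is designed to adjust."""
    for _ in range(iters):
        g0, g1 = [], []
        for x in data:
            s0 = math.log(1.0 - w) - 0.5 * (x - mu0) ** 2
            s1 = math.log(w) - 0.5 * (x - mu1) ** 2
            (g1 if s1 > s0 else g0).append(x)
        w = len(g1) / len(data)
        mu0, mu1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return w, mu0, mu1

# Even when initialized at the true parameters, the hard assignment
# truncates each component at the decision boundary, so the fixed
# point of VT sits slightly away from the truth (the bias the
# abstract describes).
w_hat, mu0_hat, mu1_hat = viterbi_train(data, w_true, *mu_true)
```

Replacing the hard assignment with posterior-weighted averages would recover the EM update; the VA idea, by contrast, keeps the cheap hard assignment and compensates for its bias analytically via the limiting distributions studied in the paper.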
Bernoulli, Volume 14, Number 1 (2008), 180-206.
First available in Project Euclid: 8 February 2008
Lember, Jüri; Koloydenko, Alexey. The adjusted Viterbi training for hidden Markov models. Bernoulli 14 (2008), no. 1, 180--206. doi:10.3150/07-BEJ105. https://projecteuclid.org/euclid.bj/1202492790