Open Access
The adjusted Viterbi training for hidden Markov models
Jüri Lember, Alexey Koloydenko
Bernoulli 14(1): 180-206 (February 2008). DOI: 10.3150/07-BEJ105

Abstract

The EM procedure is a principal tool for parameter estimation in hidden Markov models (HMMs). However, applications often replace EM by Viterbi extraction, or training (VT). VT is computationally less intensive and more stable, and it has more intuitive appeal; however, VT estimation is biased and does not satisfy the following fixed point property: hypothetically, given an infinitely large sample and initialized to the true parameters, VT will generally move away from the initial values. We propose adjusted Viterbi training (VA), a new method that restores the fixed point property and thus alleviates the overall imprecision of the VT estimators, while preserving the computational advantages of the baseline VT algorithm. Simulations elsewhere have shown that VA appreciably improves the precision of estimation both in the special case of mixture models and for more general HMMs. However, being entirely analytic, the VA correction relies on infinite Viterbi alignments and the associated limiting probability distributions. While these are explicit in the mixture case, the existence of the limiting measures is not obvious for more general HMMs. This paper proves that, under certain mild conditions, the required limiting distributions do exist for general HMMs.
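For orientation, the following is a minimal Python sketch of the baseline Viterbi training (hard EM) loop for a discrete-emission HMM; it is not the paper's adjusted method, and the parameter names (pi, A, B) and the smoothing constant eps are illustrative assumptions. Where Baum-Welch re-estimates parameters from posterior (soft) state probabilities, VT substitutes frequency counts along the single Viterbi alignment, which is the source of the bias that the VA correction targets.

import numpy as np

def viterbi_path(obs, pi, A, B):
    # Most likely hidden state sequence, by dynamic programming in the log domain.
    T, K = len(obs), len(pi)
    logd = np.zeros((T, K))
    back = np.zeros((T, K), dtype=int)
    logd[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = logd[t - 1][:, None] + np.log(A)  # scores[i, j]: best path ending in i, then i -> j
        back[t] = scores.argmax(axis=0)
        logd[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = np.empty(T, dtype=int)
    path[-1] = logd[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

def viterbi_training(obs, pi, A, B, n_iter=20, eps=1e-6):
    # Hard EM: alternate a Viterbi alignment (E-like step) with
    # re-estimation by frequency counts along that alignment (M-like step).
    # pi is held fixed here: with a single sequence, its hard-count
    # estimate would be a point mass on the first aligned state.
    K, M = B.shape
    for _ in range(n_iter):
        path = viterbi_path(obs, pi, A, B)
        A_new = np.full((K, K), eps)  # eps-smoothing keeps all probabilities positive
        B_new = np.full((K, M), eps)
        for t in range(len(obs) - 1):
            A_new[path[t], path[t + 1]] += 1.0
        for t, o in enumerate(obs):
            B_new[path[t], o] += 1.0
        A = A_new / A_new.sum(axis=1, keepdims=True)
        B = B_new / B_new.sum(axis=1, keepdims=True)
    return A, B

Initialized at the true parameters and run on an arbitrarily long sequence, this loop will generally drift away from them; that is precisely the failed fixed point property described above, which the adjusted method (VA) compensates for analytically.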

Citation


Jüri Lember, Alexey Koloydenko. "The adjusted Viterbi training for hidden Markov models." Bernoulli 14(1): 180–206, February 2008. https://doi.org/10.3150/07-BEJ105

Information

Published: February 2008
First available in Project Euclid: 8 February 2008

zbMATH: 1168.60033
MathSciNet: MR2401659
Digital Object Identifier: 10.3150/07-BEJ105

Keywords: Baum–Welch, bias, computational efficiency, consistency, EM, hidden Markov models, maximum likelihood, parameter estimation, Viterbi extraction, Viterbi training

Rights: Copyright © 2008 Bernoulli Society for Mathematical Statistics and Probability
