
I have been learning about HMMs recently and got confused by the training problem (estimating the model parameters and the hidden states given an observed outcome sequence).

As far as I know, both Viterbi learning and Baum-Welch (the forward-backward algorithm) are used to estimate the model parameters and hidden states in an EM fashion: the forward-backward algorithm is used in the E step to estimate the transition/emission probabilities. What about estimating the hidden state sequence in the E step? Is it estimated with the forward-backward algorithm as well, or with the Viterbi algorithm?

I would really appreciate it if anyone could share some insight.


1 Answer


The HMM parameters are estimated using the forward-backward algorithm, also called the Baum-Welch algorithm. In its E step, the forward-backward recursions yield, for each time step, the posterior probability of being in each hidden state (soft assignments) rather than a single hidden state sequence; Viterbi training is the variant that replaces those soft assignments with the single best state sequence from the Viterbi algorithm.

The Viterbi algorithm is used to get the most likely state sequence for a given observation sequence.

Therefore, the two algorithms you mentioned solve different problems. Classically, there are 3 problems for HMMs:

  1. Given an observation sequence and a model, find the likelihood of the sequence under the model. This problem is solved with the forward algorithm (see the first sketch after this list).
  2. Given an observation sequence and a model, find the optimal state sequence of the model for that observation sequence. This problem is solved with the Viterbi algorithm (second sketch).
  3. Adjust the model parameters to maximize the likelihood of some observation sequence. This is basically training a model from initial values. This problem is solved with the Baum-Welch algorithm (third sketch).
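To make problem 1 concrete, here is a minimal NumPy sketch of the forward algorithm. The function name `forward` and the parameter names (`pi` for the initial distribution, `A` for transitions, `B` for emissions) are my own illustrative choices; it computes raw probabilities without scaling, so it will underflow on long sequences.

```python
import numpy as np

def forward(obs, pi, A, B):
    """Likelihood P(obs | model) via the forward recursion.

    obs: observation indices, length T
    pi:  initial state distribution, shape (N,)
    A:   A[i, j] = P(next state j | state i), shape (N, N)
    B:   B[i, k] = P(symbol k | state i), shape (N, M)
    """
    alpha = pi * B[:, obs[0]]                # alpha_1(i) = pi_i * b_i(o_1)
    for t in range(1, len(obs)):
        # alpha_t(j) = [sum_i alpha_{t-1}(i) * a_ij] * b_j(o_t)
        alpha = (alpha @ A) * B[:, obs[t]]
    return alpha.sum()                       # P(obs) = sum over final states

# Toy two-state, two-symbol example
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(forward([0, 1, 0], pi, A, B))
```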
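For problem 2, a corresponding sketch of the Viterbi algorithm, working in log space to avoid underflow (same assumed parameter layout as above):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden state sequence for obs, in log space."""
    T, N = len(obs), len(pi)
    logA, logB = np.log(A), np.log(B)
    delta = np.log(pi) + logB[:, obs[0]]     # best log-prob ending in each state
    psi = np.zeros((T, N), dtype=int)        # back-pointers
    for t in range(1, T):
        scores = delta[:, None] + logA       # scores[i, j]: best path to i, then i -> j
        psi[t] = scores.argmax(axis=0)       # best predecessor of each state j
        delta = scores.max(axis=0) + logB[:, obs[t]]
    states = [int(delta.argmax())]           # best final state
    for t in range(T - 1, 0, -1):            # follow back-pointers
        states.append(psi[t, states[-1]])
    return states[::-1]
```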
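And for problem 3, a sketch of a single Baum-Welch (EM) iteration: the E step runs the forward-backward recursions to get posterior state occupancies (`gamma`) and posterior transition counts (`xi`), and the M step re-estimates the parameters from those expected counts. Again, this unscaled version is only for readability; practical implementations rescale `alpha` and `beta` at each time step to avoid underflow.

```python
import numpy as np

def baum_welch_step(obs, pi, A, B):
    """One EM iteration: forward-backward E step, then re-estimate pi, A, B."""
    obs = np.asarray(obs)
    T, N = len(obs), len(pi)
    alpha, beta = np.zeros((T, N)), np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                       # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):              # backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    likelihood = alpha[-1].sum()
    gamma = alpha * beta / likelihood           # gamma[t, i] = P(state_t = i | obs)
    # xi[t, i, j] = P(state_t = i, state_{t+1} = j | obs)
    xi = (alpha[:-1, :, None] * A[None] *
          (B[:, obs[1:]].T * beta[1:])[:, None, :]) / likelihood
    # M step: re-estimate parameters from expected counts
    new_pi = gamma[0]
    new_A = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    new_B = np.zeros_like(B)
    for k in range(B.shape[1]):
        new_B[:, k] = gamma[obs == k].sum(axis=0)
    new_B /= gamma.sum(axis=0)[:, None]
    return new_pi, new_A, new_B
```

Iterating this step until the likelihood stops improving is the training loop; note that the E step produces the soft `gamma` posteriors, not a hard state sequence.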

I suggest the following reading: "An Introduction to Hidden Markov Models" by L.R. Rabiner and B.H. Juang (1986), which can be found here: http://ai.stanford.edu/~pabbeel/depth_qual/Rabiner_Juang_hmms.pdf

