Viterbi algorithm

The Viterbi algorithm, named after its developer Andrew Viterbi, is a dynamic programming algorithm for finding the most likely sequence of hidden states – known as the Viterbi path – that results in a sequence of observed events, especially in the context of hidden Markov models. The forward algorithm is a closely related algorithm for computing the probability of a sequence of observed events.

The Viterbi algorithm was originally conceived as an error-correction scheme for noisy digital communication links, finding universal application in decoding the convolutional codes used in CDMA and GSM digital cellular, dial-up modems, satellite and deep-space communications, and 802.11 wireless LANs. It is now also commonly used in information theory, speech recognition, keyword spotting, computational linguistics, and bioinformatics. For example, in speech-to-text speech recognition, the acoustic signal is treated as the observed sequence of events, and a string of text is considered to be the "hidden cause" of the acoustic signal. The Viterbi algorithm finds the most likely string of text given the acoustic signal.

The algorithm is not general; it makes a number of assumptions. First, both the observed events and hidden events must be in a sequence. This sequence often corresponds to time. Second, these two sequences need to be aligned, and an observed event needs to correspond to exactly one hidden event. Third, computing the most likely hidden sequence up to a certain point t must only depend on the observed event at point t, and the most likely sequence at point t − 1. These assumptions are all satisfied in a first-order hidden Markov model.
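
Concretely, writing \pi_k for the start probability of state k, a_{x,k} for the transition probability from state x to state k, and b_k(y_t) for the probability that state k emits observation y_t, these assumptions mean that the probability V_{t,k} of the most likely hidden sequence ending in state k after t observations obeys a simple recurrence (this is the standard textbook formulation; the implementation below organizes the same computation slightly differently, applying the emission at the source state and appending one final transition):

   V_{1,k} = \pi_k \cdot b_k(y_1)
   V_{t,k} = b_k(y_t) \cdot \max_x ( a_{x,k} \cdot V_{t-1,x} )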

The terms "Viterbi path" and "Viterbi algorithm" are also applied to related dynamic programming algorithms that discover the single most likely explanation for an observation. For example, in stochastic parsing a dynamic programming algorithm can be used to discover the single most likely context-free derivation (parse) of a string, which is sometimes called the "Viterbi parse".

A concrete example

Suppose you have a friend who lives far away and whom you call each day to ask what he did. He only ever does one of three things: walk in the park, shop, or clean his apartment, and his choice depends solely on the weather where he lives, which is either rainy or sunny. You cannot observe that weather directly, so it forms a hidden Markov chain with the two states 'Rainy' and 'Sunny', while the reported activities 'walk', 'shop', and 'clean' are the observed events.
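
For concreteness, the following model parameters are assumed here; they are the values commonly used with this example and they reproduce the results quoted at the end of this section:

states = ('Rainy', 'Sunny')
observations = ('walk', 'shop', 'clean')

## probability of the weather on the first day
start_probability = {'Rainy': 0.6, 'Sunny': 0.4}

## probability of tomorrow's weather given today's weather
transition_probability = {
   'Rainy': {'Rainy': 0.7, 'Sunny': 0.3},
   'Sunny': {'Rainy': 0.4, 'Sunny': 0.6},
}

## probability of each activity given the weather
emission_probability = {
   'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
   'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1},
}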

You talk to your friend three days in a row and discover that on the first day he went for a walk, on the second day he went shopping, and on the third day he cleaned his apartment. You have two questions: What is the overall probability of this sequence of observations? And what is the most likely sequence of rainy/sunny days that would explain these observations? The first question is answered by the forward algorithm; the second by the Viterbi algorithm. These two algorithms are structurally so similar (in fact, they are both instances of the same abstract algorithm) that they can be implemented in a single function:

def forward_viterbi(y, X, sp, tp, ep):
   T = {}
   for state in X:
       ##          prob.      V. path  V. prob.
       T[state] = (sp[state], [state], sp[state])
   for output in y:
       U = {}
       for next_state in X:
           total = 0
           argmax = None
           valmax = 0
           for state in X:
               (prob, v_path, v_prob) = T[state]
               p = ep[state][output] * tp[state][next_state]
               prob *= p
               v_prob *= p
               total += prob
               if v_prob > valmax:
                   argmax = v_path + [next_state]
                   valmax = v_prob
           U[next_state] = (total, argmax, valmax)
       T = U
   ## apply sum/max to the final states:
   total = 0
   argmax = None
   valmax = 0
   for state in X:
       (prob, v_path, v_prob) = T[state]
       total += prob
       if v_prob > valmax:
           argmax = v_path
           valmax = v_prob
   return (total, argmax, valmax)

The function forward_viterbi takes the following arguments: y is the sequence of observations, e.g. ['walk', 'shop', 'clean']; X is the set of hidden states; sp gives the start probability of each state; tp gives the transition probability from each state to each other state; and ep gives the probability of each observation being emitted from each state.

The algorithm works on the mappings T and U. Each is a mapping from a state to a triple (prob, v_path, v_prob), where prob is the total probability of all paths from the start to the current state, v_path is the Viterbi path up to the current state, and v_prob is the probability of the Viterbi path up to the current state. The mapping T holds this information for a given point t in time, and the main loop constructs U, which holds similar information for time t+1. Because of the Markov property, information about any point in time prior to t is not needed.

The algorithm begins by initializing T to the start probabilities: the total probability for a state is just the start probability of that state; and the Viterbi path to a start state is the singleton path consisting only of that state; the probability of the Viterbi path is the same as the start probability.
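
With the parameters assumed above, T would initially be:

T = {'Rainy': (0.6, ['Rainy'], 0.6),   ## prob., Viterbi path, Viterbi prob.
     'Sunny': (0.4, ['Sunny'], 0.4)}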

The main loop considers the observations from y in sequence. Its loop invariant is that T contains the correct information up to, but excluding, the point in time of the current observation. The algorithm then computes the triple (prob, v_path, v_prob) for each possible next state.

The total probability of a given next state, total, is obtained by adding up the probabilities of all paths reaching that state. More precisely, the algorithm iterates over all possible source states. For each source state, T holds the total probability of all paths to that state. This probability is then multiplied by the emission probability of the current observation and the transition probability from the source state to the next state, and the resulting probability prob is added to total.

The probability of the Viterbi path is computed in a similar fashion, except that instead of summing over all paths one performs a discrete maximization. Initially the maximum value valmax is zero. For each source state, the probability of the Viterbi path to that state is known. This too is multiplied by the emission and transition probabilities, and it replaces valmax if it is greater than the current value. The Viterbi path itself is computed as the corresponding argmax of that maximization, by extending the Viterbi path that leads to the source state with the next state.

The triple (prob, v_path, v_prob) computed in this fashion is stored in U, and once U has been computed for all possible next states, it replaces T, thus ensuring that the loop invariant holds at the end of the iteration.
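
As an illustration with the parameters assumed above: after processing the first observation 'walk', the new mapping U (which then replaces T) would be

U = {'Rainy': (0.138, ['Sunny', 'Rainy'], 0.096),
     'Sunny': (0.162, ['Sunny', 'Sunny'], 0.144)}

where, for example, the total for 'Rainy' is 0.6*0.1*0.7 + 0.4*0.6*0.4 = 0.042 + 0.096 = 0.138, and the best path into 'Rainy' comes from 'Sunny' with probability 0.096.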

In the end another summation/maximization is performed (this could also be done inside the main loop by adding a pseudo-observation after the last real observation).

In the running example, the forward/Viterbi algorithm is used as follows:

def example():
   return forward_viterbi(['walk', 'shop', 'clean'],
                          states,
                          start_probability,
                          transition_probability,
                          emission_probability)

This reveals that the total probability of ['walk', 'shop', 'clean'] is 0.033612 and that the Viterbi path is ['Sunny', 'Rainy', 'Rainy', 'Rainy']. The Viterbi path contains four states because, in this formulation, the third observation is emitted by the third state and is followed by one final transition to a fourth state. In other words, given the observed activities, it was most likely sunny when your friend went for a walk, and then it started to rain the next day and kept on raining.

Extensions

With the algorithm called Iterative Viterbi Decoding one can find the subsequence of an observation that best matches (on average) a given HMM. Iterative Viterbi Decoding, developed by M. C. Silaghi (1998), works by iterating calls to a modified Viterbi algorithm, re-estimating the score of a filler until convergence.

An alternative algorithm, the Lazy Viterbi algorithm, has recently been proposed. It works by not expanding any node until it really needs to, and it usually manages to do considerably less work (in software) than the ordinary Viterbi algorithm for the same result; however, it is not as easy to parallelize in hardware.

References

  • Andrew J. Viterbi. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Transactions on Information Theory 13(2):260–267, April 1967. (The Viterbi decoding algorithm is described in section IV.)
  • G. D. Forney. The Viterbi algorithm. Proceedings of the IEEE 61(3):268–278, March 1973.
  • L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE 77(2):257–286, February 1989. (Describes the forward algorithm and Viterbi algorithm for HMMs).
  • J. Feldman, I. Abou-Faycal and M. Frigo. A Fast Maximum-Likelihood Decoder for Convolutional Codes.
