Markov process
In probability theory, a Markov process is a stochastic process characterized as follows: the state <math>c_k</math> at time <math>k</math> is one of a finite number of states in the range <math>\{1,\ldots,M\}</math>. Under the assumption that the process runs only from time 0 to time <math>n</math> and that the initial and final states are known, the state sequence is represented by a finite vector <math>C=(c_0,\ldots,c_n)</math>.
Let <math>P(c_k|c_0,c_1,\ldots,c_{k-1})</math> denote the probability (chance of occurrence) of the state <math>c_k</math> at time <math>k</math> conditioned on all states up to time <math>k-1</math>. The process is a first-order Markov process in the sense that this probability depends only on the immediately preceding state <math>c_{k-1}</math> at time <math>k-1</math>:
- <math>P(c_k|c_0,c_1,\ldots,c_{k-1})=P(c_k|c_{k-1}).\,</math>
For an nth-order Markov process, the probability depends on the n most recent states:
- <math>P(c_k|c_0,c_1,\ldots,c_{k-1})=P(c_k|c_{k-n},\ldots,c_{k-1}).\,</math>
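The first-order property above can be sketched in code: the next state is drawn from a distribution that depends only on the current state. This is a minimal illustrative sketch, not part of the original text; the transition matrix values are hypothetical.

```python
import random

def simulate_markov_chain(P, c0, n, seed=0):
    """Simulate a finite-state, first-order Markov chain.

    P  : M x M transition matrix, P[i][j] = P(c_k = j | c_{k-1} = i).
    c0 : initial state c_0.
    n  : final time, so the result is C = (c_0, ..., c_n).
    """
    rng = random.Random(seed)
    C = [c0]
    for _ in range(n):
        # The next state depends only on the current state C[-1]
        # (the first-order Markov property).
        C.append(rng.choices(range(len(P)), weights=P[C[-1]])[0])
    return C

# Hypothetical two-state example (M = 2, states {0, 1}).
P = [[0.9, 0.1],
     [0.4, 0.6]]
chain = simulate_markov_chain(P, c0=0, n=10)
```

For an nth-order process, the weights would instead be indexed by the tuple of the last n states.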
In general, for the Viterbi algorithm, the underlying process is assumed to be a Markov process with the following characteristics:
- finite-state: the number of states M is finite.
- discrete-time: each transition from one state to another takes the same unit of time.
- observed in memoryless noise: each observation depends probabilistically only on the transition at that time step, not on earlier transitions or observations.
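The memoryless-noise assumption can be sketched as follows: each observation is drawn from a distribution determined by the current state alone, independently of past observations. This is an illustrative sketch with hypothetical emission probabilities, conditioning on the state at each time step as a simple special case of conditioning on the transition.

```python
import random

def observe_in_noise(C, emit, seed=1):
    """Pass a state sequence C through memoryless noise.

    emit[i] is the observation distribution for state i; the
    observation z_k depends only on the state c_k at time k,
    never on earlier states or earlier observations.
    """
    rng = random.Random(seed)
    symbols = list(range(len(emit[0])))
    return [rng.choices(symbols, weights=emit[c])[0] for c in C]

# Hypothetical two-state example: a noisy readout of the state.
emit = [[0.8, 0.2],   # state 0 is usually observed as symbol 0
        [0.3, 0.7]]   # state 1 is usually observed as symbol 1
Z = observe_in_noise([0, 0, 1, 1, 0], emit)
```

Under these three assumptions, the Viterbi algorithm recovers the most likely state sequence C from the observation sequence Z.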