Posterior probability
In Bayesian probability theory, the posterior probability of an uncertain proposition is the conditional probability that is assigned to it after the relevant empirical data are taken into account. Compare with the prior probability, which may be assessed in the absence of empirical data, or which may incorporate pre-existing data and information.
The posterior probability can be calculated by Bayes' theorem from the prior probability and the likelihood function.
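As a minimal sketch of this calculation for a single proposition, the following Python snippet applies Bayes' theorem with hypothetical numbers (the prior and the two likelihoods are illustrative assumptions, not values from this article); the evidence term P(D) is obtained by the law of total probability:

```python
# Hypothetical two-hypothesis example: hypothesis H versus its complement.
p_h = 0.01              # prior probability P(H) (assumed value)
p_d_given_h = 0.95      # likelihood P(D | H) (assumed value)
p_d_given_not_h = 0.05  # likelihood P(D | not H) (assumed value)

# Law of total probability: P(D) = P(D|H) P(H) + P(D|not H) P(not H).
p_d = p_d_given_h * p_h + p_d_given_not_h * (1 - p_h)

# Bayes' theorem: P(H | D) = P(D | H) P(H) / P(D).
posterior = p_d_given_h * p_h / p_d
print(posterior)  # ~0.161: the data raise P(H) from 1% to about 16%
```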
Similarly, a posterior probability distribution is the conditional probability distribution of the uncertain quantity given the data. It can be calculated by multiplying the prior probability distribution by the likelihood function and then dividing by the normalizing constant, as illustrated numerically below. For example,
- <math>f_{X\mid Y=y}(x)={f_X(x) L_{X\mid Y=y}(x) \over {\int_{-\infty}^\infty f_X(x) L_{X\mid Y=y}(x)\,dx}}</math>
gives the posterior probability density function for a random variable X given the data Y=y, where
- <math>f_X(x)</math> is the prior density of X,
- <math>L_{X\mid Y=y}(x) = f_{Y\mid X=x}(y)</math> is the likelihood function as a function of x,
- <math>\int_{-\infty}^\infty f_X(x) L_{X\mid Y=y}(x)\,dx</math> is the normalizing constant, and
- <math>f_{X\mid Y=y}(x)</math> is the posterior density of X.
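As a concrete instance of this formula, the Python sketch below assumes a standard normal prior f_X and a Gaussian observation model Y given X=x distributed as N(x, 1); the model, the grid, and the observed value y are illustrative assumptions, not part of the article. It evaluates prior times likelihood on a grid and divides by a numerical approximation of the normalizing integral:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical model: prior X ~ N(0, 1); observation Y | X=x ~ N(x, 1).
x = np.linspace(-6.0, 6.0, 1201)  # grid covering the support of X
dx = x[1] - x[0]

prior = norm.pdf(x, loc=0.0, scale=1.0)           # f_X(x)
y_obs = 1.5                                       # observed data Y = y (assumed)
likelihood = norm.pdf(y_obs, loc=x, scale=1.0)    # L_{X|Y=y}(x) = f_{Y|X=x}(y)

# Normalizing constant: the integral of prior * likelihood, approximated
# here by a Riemann sum over the grid.
Z = np.sum(prior * likelihood) * dx

posterior = prior * likelihood / Z                # f_{X|Y=y}(x)
```

For this conjugate Gaussian model the exact posterior is N(y/2, 1/2), so the grid result can be checked against `norm.pdf(x, y_obs / 2, np.sqrt(0.5))`.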