Conjugate prior
In Bayesian probability theory, a conjugate prior is a prior distribution with the property that the posterior distribution belongs to the same family of distributions as the prior.
Consider the general problem of inferring a distribution for a parameter θ given some datum or data x. From Bayes' theorem, the posterior distribution is calculated from the prior p(θ) and the likelihood function p(x|θ) as
- <math> p(\theta|x) = \frac{p(x|\theta) \, p(\theta)}{\int p(x|\theta) \, p(\theta) \, d\theta}. </math>
Let the likelihood function be considered fixed; the likelihood function is usually well-determined from a statement of the data-generating process. It is clear that different choices of the prior distribution p(θ) may make the integral more or less difficult to calculate, and the product p(x|θ) × p(θ) may take one algebraic form or another. For certain choices of the prior, the posterior has the same algebraic form as the prior (generally with different parameters). Such a choice is a conjugate prior.
A conjugate prior is an algebraic convenience: otherwise a difficult numerical integration may be necessary.
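To see what this convenience saves, the integral in Bayes' theorem can be approximated numerically on a grid. The following sketch (illustrative only, not tied to any particular library) normalises the product of a Bernoulli likelihood and a uniform prior over a grid of parameter values; with a conjugate prior this step would be unnecessary, since the posterior's normalising constant is known in closed form.

```python
# Numerical posterior via grid approximation: normalise
# likelihood(x | theta) * prior(theta) over a grid of theta values.
# Illustrative example: Bernoulli likelihood with a uniform prior.

def grid_posterior(successes, failures, n_grid=10_000):
    """Return grid points and normalised posterior weights."""
    thetas = [(i + 0.5) / n_grid for i in range(n_grid)]
    prior = [1.0] * n_grid  # uniform prior on [0, 1]
    unnorm = [p * t**successes * (1 - t)**failures
              for p, t in zip(prior, thetas)]
    z = sum(unnorm)  # approximates the integral in the denominator
    return thetas, [u / z for u in unnorm]

thetas, post = grid_posterior(7, 3)
posterior_mean = sum(t * w for t, w in zip(thetas, post))
# With a uniform prior the exact posterior is Beta(8, 4),
# whose mean is 8 / 12, so posterior_mean should be close to 2/3.
print(posterior_mean)
```

For a one-dimensional parameter this is cheap, but the cost of such numerical integration grows quickly with the dimension of θ, which is where conjugacy pays off.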
Conjugate priors are known for many standard problems; see Gelman et al. for a catalog.
All members of the exponential family have conjugate priors.
Example
For a random variable which is a Bernoulli trial with unknown probability of success q in [0,1], the usual conjugate prior is the beta distribution with
- <math>p(q=x) = {x^{a-1}(1-x)^{b-1} \over \Beta(a,b)}</math>
where a and b are chosen to reflect any existing belief or information (a = 1 and b = 1 give a uniform distribution), and Β(a,b) is the Beta function acting as a normalising constant.
If we then sample this random variable and get s successes and f failures, we have
- <math>P(s,f|q=x) = {s+f \choose s} x^s(1-x)^f, </math>
- <math>p(q=x|s,f) = {{{s+f \choose s} x^{s+a-1}(1-x)^{f+b-1} / \Beta(a,b)} \over \int_{y=0}^1 \left({s+f \choose s} y^{s+a-1}(1-y)^{f+b-1} / \Beta(a,b)\right) dy} = {x^{s+a-1}(1-x)^{f+b-1} \over \Beta(s+a,f+b)} , </math>
which is another beta distribution with a simple change to the parameters. This posterior distribution could then be used as the prior for further samples, with the parameters updated by simple addition as each extra piece of information arrives.
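The sequential update just described can be sketched in a few lines of Python; the function name is illustrative, and the numbers are arbitrary example data.

```python
# Conjugate Beta-Bernoulli updating: because the posterior is again a
# beta distribution, updating reduces to parameter arithmetic.

def update_beta(a, b, successes, failures):
    """Beta(a, b) prior + (s, f) Bernoulli data -> Beta(a + s, b + f)."""
    return a + successes, b + failures

# Start from a uniform prior, Beta(1, 1).
a, b = 1, 1
# First batch of samples: 7 successes, 3 failures.
a, b = update_beta(a, b, 7, 3)   # posterior is Beta(8, 4)
# The posterior serves as the prior for the next batch.
a, b = update_beta(a, b, 2, 8)   # posterior is Beta(10, 12)

posterior_mean = a / (a + b)     # mean of Beta(a, b) is a / (a + b)
print(a, b, posterior_mean)
```

Note that the final parameters depend only on the totals of successes and failures, not on the order or grouping of the batches.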
References
- Andrew Gelman, John B. Carlin, Hal S. Stern, and Donald B. Rubin. Bayesian Data Analysis, 2nd edition. CRC Press, 2003. ISBN 158488388X