# Metropolis-Hastings algorithm

*(Image missing: Metropolis_hastings_algorithm.png. The sampling distribution Q determines the next point to move to in the random walk.)*

In mathematics and physics, the Metropolis-Hastings algorithm is an algorithm used to generate a sequence of samples from a probability distribution, often the joint distribution of two or more variables. The purpose of such a sequence is to approximate the distribution (as with a histogram), or to compute an integral (such as an expected value). The algorithm is an example of a Markov chain Monte Carlo method. It is a generalization of the Metropolis algorithm, suggested by Hastings (1970; see references). The Gibbs sampling algorithm is a special case of the Metropolis-Hastings algorithm.

The Metropolis-Hastings algorithm can draw samples from any probability distribution $p(x)$, requiring only that a function proportional to the density can be calculated at $x$. The algorithm generates a set of states $x^t$ which form a Markov chain, because each state $x^t$ depends only on the previous state $x^{t-1}$. The algorithm depends on the creation of a proposal density $Q(x'; x^t)$, which depends on the current state $x^t$ and which can generate a new proposed sample $x'$. For example, the proposal density could be a Gaussian function centred on the current state $x^t$:

$$
Q(x'; x^t) \sim N(x^t, \sigma^2 I)
$$

reading $Q(x'; x^t)$ as the probability of generating $x'$ given the previous value $x^t$.

This proposal density would generate samples centred around the current state with covariance $\sigma^2 I$. So we draw a new proposal state $x'$ with probability $Q(x'; x^t)$ and then calculate a value

$$
a = a_1 a_2
$$

where

$$
a_1 = \frac{P(x')}{P(x^t)}
$$

is the likelihood ratio between the proposed sample $x'$ and the previous sample $x^t$, and

$$
a_2 = \frac{Q(x^t; x')}{Q(x'; x^t)}
$$

is the ratio of the proposal density in the two directions (from $x^t$ to $x'$ and vice versa). This ratio is equal to 1 if the proposal density is symmetric. The new state $x^{t+1}$ is then chosen according to the rule

$$
x^{t+1} = \begin{cases}
x' & \text{if } a \ge 1 \\
x' \text{ with probability } a & \text{if } a < 1
\end{cases}
$$

If the proposal is rejected, the chain remains at the current state: $x^{t+1} = x^t$.
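The proposal-and-acceptance step above can be sketched in code. This is a minimal illustration, assuming a one-dimensional state and a symmetric Gaussian proposal (so $a_2 = 1$ and $a$ reduces to the likelihood ratio $a_1$); the target is supplied as a log-density for numerical stability, and the function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_step(x, log_p, sigma):
    """One Metropolis-Hastings step with a symmetric Gaussian proposal.

    Because the Gaussian proposal is symmetric, a_2 = 1 and the
    acceptance value reduces to the likelihood ratio a_1 = P(x')/P(x).
    """
    x_prime = rng.normal(x, sigma)           # draw x' from Q(x'; x^t) = N(x^t, sigma^2)
    a = np.exp(log_p(x_prime) - log_p(x))    # a = P(x')/P(x), computed in log space
    if a >= 1 or rng.random() < a:           # accept with probability min(1, a)
        return x_prime
    return x                                 # reject: stay at the current state
```

Note that only the ratio $P(x')/P(x^t)$ is ever needed, which is why the density may be known only up to a normalizing constant.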

The Markov chain is started from a random initial value $x^0$ and the algorithm is run for a few thousand iterations so that this initial state is "forgotten". These early samples, which are discarded, are known as the burn-in. The algorithm works best if the proposal density matches the shape of the target distribution $p(x)$, that is $Q(x'; x^t) \approx p(x')$, but in most cases this shape is unknown.

If a Gaussian proposal is used, the variance parameter $\sigma^2$ has to be tuned during the burn-in period. This is usually done by calculating the acceptance rate: the fraction of proposed samples that are accepted within a window of the last $N$ samples. This is often set to be around 60%, although lower targets (near 25% in high dimensions) are also common. If the proposal steps are too small, the chain mixes slowly: it moves around the space slowly and converges slowly to $p(x)$. If the proposal steps are too large, the acceptance rate will be very low, because the proposals are likely to land in regions of much lower probability density, making $a_1$ very small.
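Putting the pieces together, a complete sampler might look as follows. This is a sketch, not a tuned implementation: it assumes a one-dimensional target given as a log-density, a symmetric Gaussian proposal, and a deliberately crude rule for adapting $\sigma$ toward a target acceptance rate during burn-in.

```python
import numpy as np

def metropolis_hastings(log_p, x0, n_samples, sigma=1.0, burn_in=2000,
                        target_rate=0.6, seed=0):
    """Sketch of a 1-D Metropolis-Hastings sampler with a Gaussian proposal.

    During burn-in, sigma is crudely adapted toward the target acceptance
    rate; the burn-in samples are discarded.
    """
    rng = np.random.default_rng(seed)
    x, log_px = x0, log_p(x0)
    samples, accepts = [], 0
    for t in range(burn_in + n_samples):
        x_prime = rng.normal(x, sigma)                    # propose from N(x, sigma^2)
        log_px_prime = log_p(x_prime)
        if np.log(rng.random()) < log_px_prime - log_px:  # accept w.p. min(1, a)
            x, log_px = x_prime, log_px_prime
            accepts += 1
        if t < burn_in:
            # Crude tuning: every 100 steps, grow sigma if accepting too
            # often (steps too small), shrink it if accepting too rarely.
            if (t + 1) % 100 == 0:
                rate = accepts / (t + 1)
                sigma *= 1.1 if rate > target_rate else 0.9
        else:
            samples.append(x)
    return np.array(samples)
```

For example, `metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=5000)` would draw (correlated) samples whose histogram approximates a standard normal distribution.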

## References

• Chib, Siddhartha and Edward Greenberg. "Understanding the Metropolis–Hastings Algorithm". The American Statistician, 49:327–335, 1995.
• W. K. Hastings. "Monte Carlo Sampling Methods Using Markov Chains and Their Applications". Biometrika, 57:97–109, 1970.
• N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller. "Equation of State Calculations by Fast Computing Machines". Journal of Chemical Physics, 21:1087–1092, 1953.
