Normal distribution

The normal distribution, also called the Gaussian distribution, is an extremely important probability distribution in many fields, especially in physics and engineering. It is a family of distributions of the same general form, differing in their location and scale parameters: the mean ("average") and standard deviation ("variability"), respectively. The standard normal distribution is the normal distribution with a mean of zero and a standard deviation of one (the green curves in the plots below). It is often called the bell curve because the graph of its probability density resembles a bell.

Overview

The normal distribution is a convenient model of quantitative phenomena in the natural and behavioral sciences. A variety of psychological test scores and physical phenomena like photon counts have been found to approximately follow a normal distribution. While the underlying causes of these phenomena are often unknown, the use of the normal distribution can be theoretically justified in situations where many small effects are added together into a score or variable that can be observed. The normal distribution also arises in many areas of statistics: for example, the sampling distribution of the mean is approximately normal, even if the distribution of the population the sample is taken from is not normal. In addition, the normal distribution maximizes information entropy among all distributions with known mean and variance, which makes it the natural choice of underlying distribution for data summarized in terms of sample mean and variance. The normal distribution is the most widely used family of distributions in statistics and many statistical tests are based on the assumption of normality. In probability theory, normal distributions arise as the limiting distributions of several continuous and discrete families of distributions.

History

The normal distribution was first introduced by de Moivre in an article in 1733 (reprinted in the second edition of his The Doctrine of Chances, 1738) in the context of approximating certain binomial distributions for large n. His result was extended by Laplace in his book Analytical Theory of Probabilities (1812), and is now called the Theorem of de Moivre-Laplace.

Laplace used the normal distribution in the analysis of errors of experiments. The important method of least squares was introduced by Legendre in 1805. Gauss, who claimed to have used the method since 1794, justified it rigorously in 1809 by assuming a normal distribution of the errors.

The name "bell curve" goes back to Jouffret who used the term "bell surface" in 1872 for a bivariate normal with independent components. The name "normal distribution" was coined independently by Charles S. Peirce, Francis Galton and Wilhelm Lexis around 1875. This terminology is unfortunate, since it reflects and encourages the fallacy that many or all probability distributions are "normal". (See the discussion of "occurrence" below.)

That the distribution is called the normal or Gaussian distribution is an instance of Stigler's law of eponymy: "No scientific discovery is named after its original discoverer."

Specification of the normal distribution

There are various ways to specify a random variable. The most visual is the probability density function (plotted below), which indicates the relative likelihood of each value of the random variable. The cumulative distribution function is a conceptually cleaner way to specify the same information, but to the untrained eye its plot is much less informative (see below). Equivalent ways to specify the normal distribution are: the moments, the cumulants, the characteristic function, the moment-generating function, and the cumulant-generating function. Some of these are very useful for theoretical work, but not intuitive. See probability distribution for a discussion.

All of the cumulants of the normal distribution are zero, except the first two.

Probability density function

[Image: Normal_distribution_pdf.png. Probability density function for four different parameter sets (green line is the standard normal).]

The probability density function of the normal distribution with mean <math>\mu</math> and variance <math>\sigma^2</math> (equivalently, standard deviation <math>\sigma</math>) is an example of a Gaussian function,

<math>f(x) = \frac{1}{\sigma\sqrt{2\pi}} \, \exp \left( -\frac{(x- \mu)^2}{2\sigma^2} \right).</math>

(See also exponential function and pi.) If a random variable <math>X</math> has this distribution, we write <math>X \sim N(\mu, \sigma^2)</math>. If <math>\mu = 0</math> and <math>\sigma = 1</math>, the distribution is called the standard normal distribution and the probability density function reduces to

<math>f(x) = \frac{1}{\sqrt{2\pi}} \, \exp\left(-\frac{x^2}{2} \right).</math>

The plot above shows the graph of the probability density function of the normal distribution for various parameter values.

Some notable qualities of the normal distribution (a short numerical check follows this list):

  • The density function is symmetric about its mean value.
  • The mean is also its mode and median.
  • 68.27% of the area under the curve is within one standard deviation of the mean.
  • 95.45% of the area is within two standard deviations.
  • 99.73% of the area is within three standard deviations.
  • The inflection points of the curve occur at one standard deviation away from the mean.
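
The percentages above can be verified numerically. Below is a minimal sketch in Python, assuming only the standard library (statistics.NormalDist needs Python 3.8 or later); the parameter values are illustrative.

    import math
    from statistics import NormalDist

    mu, sigma = 2.0, 1.5                 # illustrative parameters
    dist = NormalDist(mu, sigma)

    # Density from the closed-form expression above, compared with the library value.
    x = 3.0
    pdf_formula = math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))
    assert math.isclose(pdf_formula, dist.pdf(x))

    # Area within k standard deviations of the mean, k = 1, 2, 3.
    for k in (1, 2, 3):
        area = dist.cdf(mu + k * sigma) - dist.cdf(mu - k * sigma)
        print(f"within {k} sd: {area:.4%}")   # ~68.27%, ~95.45%, ~99.73%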

Cumulative distribution function

[Image: Normal_distribution_cdf.png. Cumulative distribution function of the above pdf.]

The cumulative distribution function (cdf) is defined as the probability that a variable <math>X</math> has a value less than or equal to <math>x</math>, and it is expressed in terms of the density function as

<math>F(x) = \frac{1}{\sigma\sqrt{2\pi}} \int_{-\infty}^x \exp \left( -\frac{(u - \mu)^2}{2\sigma^2} \right) \, du .</math>

The standard normal cdf, conventionally denoted <math>\Phi</math>, is just the general cdf evaluated with <math>\mu=0</math> and <math>\sigma=1</math>,

<math>\Phi(z) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^z \exp \left( -\frac{x^2}{2} \right) \, dx .</math>

The standard normal cdf can be expressed in terms of a special function called the error function, as

<math>\Phi(z) = \frac{1}{2} \left[ 1 + \operatorname{erf} \left( \frac{z}{\sqrt{2}} \right) \right] .</math>

The inverse cumulative distribution function, or quantile function, can be expressed in terms of the inverse error function:

<math>\Phi^{-1}(p) = \sqrt2 \; \operatorname{erf}^{-1} \left(2p - 1 \right) .</math>
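
These two identities are easy to check numerically. A short sketch using Python's standard library (scipy.special.erfinv could be used for the inverse error function directly; here NormalDist.inv_cdf plays the role of the quantile function):

    import math
    from statistics import NormalDist

    std = NormalDist()   # standard normal: mu = 0, sigma = 1

    def phi(z):
        # Standard normal cdf written via the error function, as in the formula above.
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    for z in (-1.5, 0.0, 0.7, 2.3):
        assert math.isclose(phi(z), std.cdf(z), rel_tol=1e-9)

    # Quantile function: applying the forward formula to the quantile recovers p.
    for p in (0.05, 0.5, 0.975):
        assert math.isclose(phi(std.inv_cdf(p)), p, rel_tol=1e-9)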

Generating functions

Moment generating function

The moment generating function is defined as the expected value of <math>\exp(tX)</math>. For a normal distribution, it can be shown that the moment generating function is

<math>M_X(t) = \mathrm{E} \left[ \exp(tX) \right] = \int_{-\infty}^{\infty} \frac{1}{\sigma \sqrt{2\pi}} \exp \left( -\frac{(x - \mu)^2}{2 \sigma^2} \right) \exp (tx) \, dx = \exp \left( \mu t + \frac{\sigma^2 t^2}{2} \right),</math>

as can be seen by completing the square in the exponent: <math>-\frac{(x-\mu)^2}{2\sigma^2} + tx = -\frac{\left(x-(\mu+\sigma^2 t)\right)^2}{2\sigma^2} + \mu t + \frac{\sigma^2 t^2}{2}</math>, so the remaining integrand is the density of <math>N(\mu + \sigma^2 t, \sigma^2)</math> and integrates to one.

Characteristic function

The characteristic function is defined as the expected value of <math>\exp (i t X)</math>, where <math>i = \sqrt{-1}</math> is the imaginary unit. For a normal distribution, the characteristic function is

<math>\phi_X(t) = \mathrm{E} \left[ \exp(i t X) \right] = \int_{-\infty}^{\infty} \frac{1}{\sigma \sqrt{2\pi}} \exp \left(- \frac{(x - \mu)^2}{2\sigma^2} \right) \exp(i t x) \, dx = \exp \left( i \mu t - \frac{\sigma^2 t^2}{2} \right) .</math>

The characteristic function is obtained by replacing <math>t</math> with <math>i t</math> in the moment-generating function.

Properties

Some of the properties of the normal distribution (a brief simulation check of a few of these follows the list):

  1. If <math>X \sim N(\mu, \sigma^2)</math> and <math>a</math> and <math>b</math> are real numbers, then <math>a X + b \sim N(a \mu + b, (a \sigma)^2)</math> (see expected value and variance).
  2. If <math>X \sim N(\mu_X, \sigma^2_X)</math> and <math>Y \sim N(\mu_Y, \sigma^2_Y)</math> are independent normal random variables, then:
    • Their sum is normally distributed with <math>U = X + Y \sim N(\mu_X + \mu_Y, \sigma^2_X + \sigma^2_Y)</math>.
    • Their difference is normally distributed with <math>V = X - Y \sim N(\mu_X - \mu_Y, \sigma^2_X + \sigma^2_Y)</math>.
    • <math>U</math> and <math>V</math> are independent of each other if and only if <math>\sigma^2_X = \sigma^2_Y</math>.
  3. If <math>X \sim N(0, \sigma^2_X)</math> and <math>Y \sim N(0, \sigma^2_Y)</math> are independent normal random variables, then:
    • Their product <math>X Y</math> follows a distribution with density <math>p</math> given by
      <math>p(z) = \frac{1}{\pi\,\sigma_X\,\sigma_Y} \; K_0\left(\frac{|z|}{\sigma_X\,\sigma_Y}\right),</math> where <math>K_0</math> is a modified Bessel function.
    • Their ratio follows a Cauchy distribution with <math>X/Y \sim \mathrm{Cauchy}(0, \sigma_X/\sigma_Y)</math>.
  4. If <math>X_1, \cdots, X_n</math> are independent standard normal variables, then <math>X_1^2 + \cdots + X_n^2</math> has a chi-squared distribution with <math>n</math> degrees of freedom.
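
The first, second, and fourth properties can be illustrated by simulation. A rough sketch, assuming NumPy is available (all parameter values are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    mu_x, sd_x, mu_y, sd_y = 1.0, 2.0, -3.0, 0.5
    x = rng.normal(mu_x, sd_x, n)
    y = rng.normal(mu_y, sd_y, n)

    # Property 1: a*X + b has mean a*mu + b and standard deviation |a|*sigma.
    a, b = 2.5, 4.0
    print((a * x + b).mean(), (a * x + b).std())   # ~ a*mu_x + b, ~ a*sd_x

    # Property 2: X + Y is normal, with means and variances adding.
    print((x + y).mean(), (x + y).var())           # ~ mu_x + mu_y, ~ sd_x**2 + sd_y**2

    # Property 4: the sum of squares of k standard normals has mean k (chi-squared).
    k = 5
    z = rng.standard_normal((n, k))
    print((z ** 2).sum(axis=1).mean())             # ~ k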

Standardizing normal random variables

As a consequence of Property 1, it is possible to relate all normal random variables to the standard normal.

If <math>X \sim N(\mu, \sigma^2)</math>, then

<math>Z = \frac{X - \mu}{\sigma}</math>

is a standard normal random variable: <math>Z \sim N(0,1)</math>. An important consequence is that the cdf of a general normal distribution is therefore

<math>\Pr(X \le x) = \Phi \left( \frac{x-\mu}{\sigma} \right) = \frac{1}{2} \left( 1 + \operatorname{erf} \left( \frac{x-\mu}{\sigma\sqrt{2}} \right) \right) .</math>

Conversely, if <math>Z \sim N(0,1)</math>, then

<math>X = \sigma Z + \mu</math>

is a normal random variable with mean <math>\mu</math> and variance <math>\sigma^2</math>.

The standard normal distribution has been tabulated, and the other normal distributions are simple transformations of the standard one. Therefore, one can use tabulated values of the cdf of the standard normal distribution to find values of the cdf of a general normal distribution.
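
For example, the cdf of a general normal distribution can be evaluated using nothing more than the standard normal cdf <math>\Phi</math>. A minimal sketch in Python (the numbers are illustrative):

    import math

    def normal_cdf(x, mu, sigma):
        z = (x - mu) / sigma                                # standardize
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # Phi(z)

    # X ~ N(100, 15^2): probability that X <= 130, i.e. Phi(2), about 0.9772.
    print(normal_cdf(130.0, 100.0, 15.0))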

Moments

Some of the first few moments of the normal distribution are:

Number | Raw moment | Central moment | Cumulant
0 | 1 | 1 | 0
1 | <math>\mu</math> | 0 | <math>\mu</math>
2 | <math>\mu^2 + \sigma^2</math> | <math>\sigma^2</math> | <math>\sigma^2</math>
3 | <math>\mu^3 + 3\mu\sigma^2</math> | 0 | 0
4 | <math>\mu^4 + 6 \mu^2 \sigma^2 + 3 \sigma^4</math> | <math>3 \sigma^4</math> | 0

All cumulants of the normal distribution beyond the second are zero.

Generating normal random variables

For computer simulations, it is often useful to generate values that have a normal distribution. There are several methods; the most basic is to invert the standard normal cdf. More efficient methods are also known, one such method being the Box-Muller transform.

The Box-Muller transform takes two uniformly distributed values as input and maps them to two normally distributed values. This requires generating values from a uniform distribution, for which many methods are known. See also random number generators.

The Box-Muller transform is a consequence of the fact that the chi-square distribution with two degrees of freedom (see property 4 above) is an exponential random variable, which is easy to generate.
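
A minimal Box-Muller sketch in Python, using only the standard library: two Uniform(0,1) draws yield two independent standard normal draws, which can then be rescaled to any mean and standard deviation.

    import math
    import random

    def box_muller():
        u1 = 1.0 - random.random()   # in (0, 1], so the logarithm is defined
        u2 = random.random()
        # -2*ln(u1) is an exponential draw (a chi-squared variable with 2 degrees
        # of freedom, the squared radius); 2*pi*u2 is a uniformly random angle.
        r = math.sqrt(-2.0 * math.log(u1))
        theta = 2.0 * math.pi * u2
        return r * math.cos(theta), r * math.sin(theta)

    # mu + sigma * z turns a standard normal draw into an N(mu, sigma^2) draw.
    z1, z2 = box_muller()
    print(5.0 + 2.0 * z1, 5.0 + 2.0 * z2)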

The central limit theorem

[Image: Normal_approximation_to_binomial.png. Plot of the pdf of a normal distribution with μ = 12 and σ = 3, approximating the pmf of a binomial distribution with n = 48 and p = 1/4.]

The normal distribution has the very important property that under certain conditions, the distribution of a sum of a large number of independent variables is approximately normal. This is the central limit theorem.

The practical importance of the central limit theorem is that the normal distribution can be used as an approximation to some other distributions.

  • A binomial distribution with parameters <math>n</math> and <math>p</math> is approximately normal for large <math>n</math> and <math>p</math> not too close to 1 or 0 (some books recommend using this approximation only if <math>n p</math> and <math>n(1 - p)</math> are both at least 5; in this case, a continuity correction should be applied).

The approximating normal distribution has mean <math>\mu = n p</math> and variance <math>\sigma^2 = n p (1 - p)</math>.

  • A Poisson distribution with parameter <math>\lambda</math> is approximately normal for large <math>\lambda</math>.

The approximating normal distribution has mean <math>\mu = \lambda</math> and variance <math>\sigma^2 = \lambda</math>.

Whether these approximations are sufficiently accurate depends on the purpose for which they are needed, and the rate of convergence to the normal distribution. It is typically the case that such approximations are less accurate in the tails of the distribution.
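
As an illustration of the binomial case (using the parameters from the figure above), the following sketch compares an exact binomial tail probability with the normal approximation plus continuity correction, using only the Python standard library:

    import math
    from statistics import NormalDist

    n, p = 48, 0.25
    mu = n * p                           # 12
    sigma = math.sqrt(n * p * (1 - p))   # 3

    # Exact P(X <= 10) for the binomial ...
    exact = sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(11))
    # ... versus the normal approximation with continuity correction, P(X <= 10.5).
    approx = NormalDist(mu, sigma).cdf(10.5)
    print(exact, approx)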

Infinite divisibility

The normal distributions are infinitely divisible probability distributions.

Standard deviation

[Image: Standard_deviation_diagram.png. Dark blue is less than one standard deviation from the mean; for the normal distribution, this accounts for 68% of the set, while two standard deviations from the mean (blue and brown) account for 95% and three standard deviations (blue, brown and green) account for 99.7%.]

In practice, one often assumes that data are from an approximately normally distributed population. If that assumption is justified, then about 68% of the values are within one standard deviation of the mean, about 95% of the values are within two standard deviations, and about 99.7% lie within three standard deviations. This is known as the "68-95-99.7 rule".

Related distributions

  • <math>R \sim \mathrm{Rayleigh}(\sigma^2)</math> is a Rayleigh distribution if <math>R = \sqrt{X^2 + Y^2}</math> where <math>X \sim N(0, \sigma^2)</math> and <math>Y \sim N(0, \sigma^2)</math> are two independent normal random variables.
  • <math>Y \sim \chi_{\nu}^2</math> is a chi-square distribution with <math>\nu</math> degrees of freedom if <math>Y = \sum_{k=1}^{\nu} X_k^2</math> where the <math>X_k \sim N(0,1)</math>, <math>k = 1, \cdots, \nu</math>, are independent.
  • <math>Y \sim \mathrm{Cauchy}(\mu = 0, \theta = 1)</math> is a Cauchy distribution if <math>Y = X_1/X_2</math> where <math>X_1 \sim N(0,1)</math> and <math>X_2 \sim N(0,1)</math> are two independent standard normal random variables.
  • <math>Y \sim \mbox{Log-N}(\mu, \sigma^2)</math> is a log-normal distribution if <math>Y = \exp(X)</math> and <math>X \sim N(\mu, \sigma^2)</math>.
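
The four relationships above are easy to illustrate by simulation. A rough sketch, assuming NumPy is available (parameter values are illustrative):

    import numpy as np

    rng = np.random.default_rng(1)
    n, sigma = 500_000, 2.0

    x = rng.normal(0.0, sigma, n)
    y = rng.normal(0.0, sigma, n)

    rayleigh = np.hypot(x, y)                               # sqrt(X^2 + Y^2)
    chi2 = (rng.standard_normal((n, 3)) ** 2).sum(axis=1)   # chi-square, 3 df
    cauchy = rng.standard_normal(n) / rng.standard_normal(n)
    lognorm = np.exp(rng.normal(0.0, 1.0, n))

    print(rayleigh.mean(), sigma * np.sqrt(np.pi / 2))      # Rayleigh mean is sigma*sqrt(pi/2)
    print(chi2.mean())                                      # ~ 3 (the degrees of freedom)
    print(np.median(cauchy))                                # ~ 0 (the mean does not converge)
    print(np.log(lognorm).mean(), np.log(lognorm).std())    # ~ 0 and ~ 1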

Occurrence

Approximately normal distributions occur in many situations, as a result of the central limit theorem. When there is reason to suspect the presence of a large number of small effects acting additively and independently, it is reasonable to assume that observations will be normal. There are statistical methods to empirically test that assumption, for example the Kolmogorov-Smirnov test.

Effects can also act as multiplicative (rather than additive) modifications. In that case, the assumption of normality is not justified, and it is the logarithm of the variable of interest that is normally distributed. The distribution of the directly observed variable is then called log-normal.

Finally, if there is a single external influence which has a large effect on the variable under consideration, the assumption of normality is not justified either. This is true even if, when the external variable is held constant, the resulting marginal distributions are indeed normal. The full distribution will be a superposition of normal variables, which is not in general normal. This is related to the theory of errors (see below).

To summarize, here's a list of situations where approximate normality is sometimes assumed. For a fuller discussion, see below.

  • In counting problems (so the central limit theorem includes a discrete-to-continuum approximation) where reproductive random variables are involved, such as
    • Binomial random variables, associated to yes/no questions;
    • Poisson random variables, associated to rare events;
  • In physiological measurements of biological specimens:
    • The logarithm of measures of size of living tissue (length, height, skin area, weight);
    • The length of inert appendages (hair, claws, nails, teeth) of biological specimens, in the direction of growth; presumably the thickness of tree bark also falls under this category;
    • Other physiological measures may be normally distributed, but there is no reason to expect that a priori;
  • Measurement errors are assumed to be normally distributed, and any deviation from normality must be explained;
  • Financial variables
    • The logarithm of interest rates, exchange rates, and inflation; these variables behave like compound interest, not like simple interest, and so are multiplicative;
    • Stock-market indices are supposed to be multiplicative too, but some researchers claim that they are Levy-distributed variables instead of lognormal;
    • Other financial variables may be normally distributed, but there is no reason to expect that a priori;
  • Light intensity
    • The intensity of laser light is normally distributed;
    • Thermal light has a Bose-Einstein distribution on very short time scales, and a normal distribution on longer timescales due to the central limit theorem.

Of relevance to biology and economics is the fact that complex systems tend to display power laws rather than normality.

Photon counting

Light intensity from a single source varies with time, and thermal fluctuations can be observed if the light is analyzed at sufficiently high time resolution. The intensity is usually assumed to be normally distributed. In the classical theory of optical coherence, light is modelled as an electromagnetic wave, and correlations are observed and analyzed up to the second order, consistently with the assumption of normality. (See Gaussian stochastic process.)

However, non-classical correlations are sometimes observed. Quantum mechanics interprets measurements of light intensity as photon counting. The natural assumption in this setting is the Poisson distribution. When light intensity is integrated over times longer than the coherence time and is large, the Poisson-to-normal limit is appropriate. Correlations are interpreted in terms of "bunching" and "anti-bunching" of photons with respect to the expected Poisson behaviour. Anti-bunching requires a quantum model of light emission.

Ordinary light sources producing light by thermal emission display a so-called blackbody spectrum (of intensity as a function of frequency), and the number of photons at each frequency follows a Bose-Einstein distribution (a geometric distribution). The coherence time of thermal light is exceedingly low, and so a Poisson distribution is appropriate in most cases, even when the intensity is so low as to preclude the approximation by a normal distribution.

Laser light has an exactly Poisson photon-count distribution and long coherence times. The large intensities involved make the normal approximation appropriate.

It is interesting that the classical model of light correlations applies only to laser light, which is a macroscopic quantum phenomenon. On the other hand, "ordinary" light sources do not follow the "classical" model or the normal distribution.

Measurement errors

Normality is the central assumption of the mathematical theory of errors. Similarly, in statistical model-fitting, an indicator of goodness of fit is that the residuals (as the errors are called in that setting) be independent and normally distributed. Any deviation from normality needs to be explained. In that sense, both in model-fitting and in the theory of errors, normality is the only observation that need not be explained, being expected.

Repeated measurements of the same quantity are expected to yield results which are clustered around a particular value. If all major sources of errors have been taken into account, it is assumed that the remaining error must be the result of a large number of very small additive effects, and hence normal. Deviations from normality are interpreted as indications of systematic errors which have not been taken into account.

Physical characteristics of biological specimens

The overwhelming biological evidence is that bulk growth processes of living tissue proceed by multiplicative, not additive, increments, and that therefore measures of body size should at most follow a lognormal rather than normal distribution. Despite common claims of normality, the sizes of plants and animals are approximately lognormal. The evidence and an explanation based on models of growth were first published in the classic book

Huxley, Julian: Problems of Relative Growth (1932)

Differences in size due to sexual dimorphism, or other polymorphisms like the worker/soldier/queen division in social insects, further make the joint distribution of sizes deviate from lognormality.

The assumption that the linear size of biological specimens is normal leads to a non-normal distribution of weight (since weight and volume scale roughly as the third power of length, and Gaussian distributions are only preserved by linear transformations), and conversely assuming that weight is normal leads to non-normal lengths. This is a problem, because there is no a priori reason why one of length or body mass, and not the other, should be normally distributed. Lognormal distributions, on the other hand, are preserved by powers, so the "problem" goes away if lognormality is assumed.

On the other hand, there are some biological measures where normality is assumed or expected:

  • blood pressure of adult humans is supposed to be normally distributed, but only after separating males and females into different populations (each of which is normally distributed)
  • The length of inert appendages such as hair, nails, teeth, claws and shells is expected to be normally distributed if measured in the direction of growth. This is because the growth of inert appendages depends on the size of the root, and not on the length of the appendage, and so proceeds by additive increments. Hence, we have an example of a sum of very many small increments (possibly lognormal) approaching a normal distribution. Another plausible example is the width of tree trunks, where a new thin ring is produced every year whose width is affected by a large number of factors.

Financial variables

Because of the exponential nature of interest and inflation, financial indicators such as interest rates, stock values, or commodity prices make good examples of multiplicative behavior. As such, they should not be expected to be normal, but lognormal.

Benoît Mandelbrot, the popularizer of fractals, has claimed that even the assumption of lognormality is flawed, and advocates the use of log-Levy distributions.

It is accepted that financial indicators deviate from lognormality. The distribution of price changes on short time scales is observed to have "heavy tails", so that very small or very large price changes are more likely to occur than a lognormal model would predict. Deviation from lognormality indicates that the assumption of independence of the multiplicative influences is flawed.

Lifetime

Other examples of variables that are not normally distributed include the lifetimes of humans or mechanical devices. Examples of distributions used in this connection are the exponential distribution (memoryless) and the Weibull distribution. In general, there is no reason that waiting times should be normal, since they are not directly related to any kind of additive influence.

Test scores

A great deal of confusion exists over whether or not IQ test scores and intelligence are normally distributed. While for most practical purposes the distributions of IQ and intelligence (or at least psychometric g) can be seen as the same thing, it is important to distinguish between the two terms when discussing whether they are normally distributed.

As a deliberate result of test construction, IQ scores are always and obviously normally distributed for the majority of the population. Whether intelligence itself is normally distributed is less clear. The difficulty and number of questions on an IQ test are decided based on which combinations will yield a normal distribution. This does not mean, however, that the information is in any way being misrepresented, or that there is any kind of "true" distribution that is being artificially forced into the shape of a normal curve. Intelligence tests can be constructed to yield any kind of score distribution desired. All true IQ tests have a normal distribution of scores as a result of test design; otherwise IQ scores would be meaningless without knowing what test produced them. Intelligence tests in general, however, can produce any kind of distribution.

For an example of how arbitrary the distribution of intelligence test scores really is, imagine a 20-item multiple-choice test composed mostly of problems about finding the areas of circles. Such a test, if given to a population of high-school students, would likely yield a U-shaped distribution, with the bulk of the scores being very high or very low, instead of a normal curve. A student who understands how to find the area of a circle can likely do so repeatedly and with few errors, and would get a perfect or high score, whereas a student who has never had geometry lessons would likely get every question wrong, perhaps getting a few right by lucky guessing. If a test is composed mostly of easy questions, then most of the test-takers will have high scores and very few will have low scores. If a test is composed entirely of questions so easy or so hard that every person gets either a perfect score or a zero, it fails to make any statistical discrimination at all, and the scores pile up at the two extremes. These are just a few examples of the many varieties of distributions that could theoretically be produced by carefully designing intelligence tests.

Whether intelligence itself is normally distributed has been at times a matter of some debate. Some critics maintain that the choice of a normal distribution is entirely arbitrary. Brian Simon once claimed that the normal distribution was specifically chosen by psychometricians to falsely support the idea that superior intelligence is only held by a small minority, thus legitimizing the rule of a privileged elite over the masses of society. Historically, though, intelligence tests were designed without any concern for producing a normal distribution, and scores came out approximately normally distributed anyway. American educational psychologist Arthur Jensen claims that any test that contains "a large number of items," "a wide range of item difficulties," "a variety of content or forms," and "items that have a significant correlation with the sum of all other scores" will inevitably produce a normal distribution. Furthermore, there exist a number of correlations between IQ scores and other human characteristics that are more demonstrably normally distributed, such as nerve conduction velocity and the glucose metabolism rate of a person's brain, supporting the idea that intelligence is normally distributed.

Some critics, such as Stephen Jay Gould in his book The Mismeasure of Man, question the validity of intelligence tests in general, not just the fact that intelligence is normally distributed. For further discussion see the article IQ.

The Bell Curve is a controversial book on the topic of the heritability of intelligence. However, despite its title, the book does not primarily address whether IQ is normally distributed.

Estimation of parameters

Maximum likelihood estimation of parameters

Suppose

<math>X_1,\dots,X_n</math>

are independent and identically distributed, and are normally distributed with expectation μ and variance σ². In the language of statisticians, the observed values of these random variables make up a "sample from a normally distributed population." It is desired to estimate the "population mean" μ and the "population standard deviation" σ, based on observed values of this sample. The joint probability density function of these random variables is

<math>f(x_1,\dots,x_n) \propto \sigma^{-n} \prod_{i=1}^n \exp\left({-1 \over 2} \left({x_i-\mu \over \sigma}\right)^2\right).</math>

(Nota bene: Here the proportionality symbol <math>\propto</math> means proportional as a function of <math>\mu</math> and <math>\sigma</math>, not proportional as a function of <math>x_1,\dots,x_n</math>. That may be considered one of the differences between the statistician's point of view and the probabilist's point of view. The reason why this is important will appear below.)

As a function of μ and σ this is the likelihood function

<math>L(\mu,\sigma) \propto \sigma^{-n} \exp\left({-\sum_{i=1}^n (x_i-\mu)^2 \over 2\sigma^2}\right).</math>

In the method of maximum likelihood, the values of μ and σ that maximize the likelihood function are taken to be estimates of the population parameters μ and σ.

Usually in maximizing a function of two variables, one might consider partial derivatives. But here we will exploit the fact that the value of μ that maximizes the likelihood function with σ fixed does not depend on σ. Therefore, we can find that value of μ, then substitute it for μ in the likelihood function, and finally find the value of σ that maximizes the resulting expression.

It is evident that the likelihood function is a decreasing function of the sum

<math>\sum_{i=1}^n (x_i-\mu)^2. \,\!</math>

So we want the value of μ that minimizes this sum. Let

<math>\overline{x}=(x_1+\cdots+x_n)/n</math>

be the "sample mean". Observe that

<math>\sum_{i=1}^n (x_i-\mu)^2 = \sum_{i=1}^n\left((x_i-\overline{x})+(\overline{x}-\mu)\right)^2 = \sum_{i=1}^n(x_i-\overline{x})^2 + 2\sum_{i=1}^n (x_i-\overline{x})(\overline{x}-\mu) + \sum_{i=1}^n (\overline{x}-\mu)^2 = \sum_{i=1}^n(x_i-\overline{x})^2 + 0 + n(\overline{x}-\mu)^2.</math>

Only the last term depends on μ and it is minimized by

<math>\hat{\mu}=\overline{x}.</math>

That is the maximum-likelihood estimate of μ. Substituting that for μ in the sum above makes the last term vanish. Consequently, when we substitute that estimate for μ in the likelihood function, we get

<math>L(\overline{x},\sigma) \propto \sigma^{-n} \exp\left({-\sum_{i=1}^n (x_i-\overline{x})^2 \over 2\sigma^2}\right).</math>

It is conventional to denote the "loglikelihood function", i.e., the logarithm of the likelihood function, by a lower-case <math>\ell</math>, and we have

<math>\ell(\hat{\mu},\sigma)=[\mathrm{constant}]-n\log(\sigma)-{\sum_{i=1}^n(x_i-\overline{x})^2 \over 2\sigma^2}</math>

and then

<math>{\partial \over \partial\sigma}\ell(\hat{\mu},\sigma) = {-n \over \sigma} + {\sum_{i=1}^n (x_i-\overline{x})^2 \over \sigma^3} = {-n \over \sigma^3}\left(\sigma^2-{1 \over n}\sum_{i=1}^n (x_i-\overline{x})^2 \right).</math>

This derivative is positive, zero, or negative according as σ² is between 0 and

<math>{1 \over n}\sum_{i=1}^n(x_i-\overline{x})^2,</math>

or equal to that quantity, or greater than that quantity.

Consequently, this average of squares of residuals is the maximum-likelihood estimate of σ², and its square root is the maximum-likelihood estimate of σ.
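
In code, the two maximum-likelihood estimates derived above are just the sample mean and the square root of the average squared residual. A minimal sketch, assuming NumPy is available (the data here are synthetic):

    import numpy as np

    rng = np.random.default_rng(2)
    data = rng.normal(10.0, 3.0, 1000)          # synthetic sample from N(10, 3^2)

    mu_hat = data.mean()                        # maximum-likelihood estimate of mu
    sigma2_hat = ((data - mu_hat) ** 2).mean()  # divides by n, not n - 1
    sigma_hat = np.sqrt(sigma2_hat)             # maximum-likelihood estimate of sigma
    print(mu_hat, sigma_hat)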

Surprising generalization

The derivation of the maximum-likelihood estimator of the covariance matrix of a multivariate normal distribution is perhaps surprisingly subtle and elegant. It involves the spectral theorem and the reason why it can be better to view a scalar as the trace of a 1×1 matrix than as a mere scalar. See estimation of covariance matrices.

Unbiased estimation of parameters

The maximum likelihood estimator of the population mean <math>\mu</math> from a sample is an unbiased estimator of the mean, as is the variance when the mean of the population is known a priori. However, if we are faced with a sample and no knowledge of the mean or the variance of the population from which it is drawn, the unbiased estimator of the variance <math>\sigma^2</math> is:

<math>s^2 = \frac{1}{n-1} \sum_{i=1}^n (x_i - \overline{x})^2 .</math>
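
In NumPy, for example, this estimator corresponds to the ddof=1 option of var, while the maximum-likelihood estimate corresponds to the default ddof=0 (a minimal sketch; the data are synthetic):

    import numpy as np

    rng = np.random.default_rng(3)
    data = rng.normal(0.0, 1.0, 20)

    s2_unbiased = data.var(ddof=1)   # divides by n - 1 (the estimator above)
    s2_mle = data.var(ddof=0)        # divides by n (maximum likelihood)
    print(s2_unbiased, s2_mle)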

