Variance

In probability theory and statistics, the variance of a random variable is a measure of its statistical dispersion, indicating how far from the expected value its values typically are. The variance of a real-valued random variable is its second central moment, and also its second cumulant (cumulants differ from central moments only at and above degree 4).
Definition
If μ = E(X) is the expected value (mean) of the random variable X, then the variance is
 <math>\operatorname{var}(X)=\operatorname{E}((X-\mu)^2).</math>
That is, it is the expected value of the square of the deviation of X from its own mean. In plain language, it can be expressed as "the average of the squared distance of each data point from the mean". It is thus the mean squared deviation. The variance of a random variable X is typically designated as <math>\operatorname{var}(X)</math>, <math>\sigma_X^2</math>, or simply <math>\sigma^2</math>.
Note that the above definition can be used for both discrete and continuous random variables.
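For a discrete random variable, the definition can be evaluated directly from the probability mass function. A minimal sketch, using a fair six-sided die as an illustrative distribution (the die is not part of the article's text):

```python
def variance(values, probs):
    """Return var(X) = E((X - mu)^2) for a discrete distribution."""
    mu = sum(v * p for v, p in zip(values, probs))  # expected value E(X)
    return sum(p * (v - mu) ** 2 for v, p in zip(values, probs))

# A fair die: values 1..6, each with probability 1/6.
faces = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6
print(variance(faces, probs))  # 35/12 ≈ 2.9167
```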
Many distributions, such as the Cauchy distribution, do not have a variance because the relevant integral diverges. In particular, if a distribution does not have an expected value, it does not have a variance either. The converse is not true: there are distributions whose expected value exists but whose variance does not.
Properties
If the variance is defined, it is never negative, because squares are nonnegative. The unit of variance is the square of the unit of observation: for example, the variance of a set of heights measured in centimeters will be given in square centimeters. This inconvenience has motivated many statisticians to use instead the square root of the variance, known as the standard deviation, as a summary of dispersion.
It can be proven easily from the definition that the variance does not depend on the mean value <math>\mu</math>. That is, if the variable is "displaced" by an amount b by taking X + b, the variance of the resulting random variable is unchanged. By contrast, if the variable is multiplied by a scaling factor a, the variance is multiplied by <math>a^2</math>. More formally, if a and b are real constants and X is a random variable whose variance is defined, then
 <math>\operatorname{var}(aX+b)=a^2\operatorname{var}(X).</math>
Another formula for the variance that follows in a straightforward manner from the above definition is:
 <math>\operatorname{var}(X)=\operatorname{E}(X^2) - (\operatorname{E}(X))^2.</math>
This is often used to calculate the variance in practice.
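The equivalence of the two formulas can be checked numerically. A sketch on a small illustrative data set (treated as a uniform discrete distribution; the values are arbitrary):

```python
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(data)
mean = sum(data) / n

# Definitional form: mean squared deviation from the mean.
by_definition = sum((x - mean) ** 2 for x in data) / n
# Shortcut form: E(X^2) - (E(X))^2.
by_shortcut = sum(x * x for x in data) / n - mean ** 2

print(by_definition, by_shortcut)  # 4.0 4.0
```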
One reason for the use of the variance in preference to other measures of dispersion is that the variance of the sum (or difference) of independent random variables is the sum of their variances. A weaker condition than independence, called uncorrelatedness, also suffices. In general,
 <math>\operatorname{var}(aX+bY) = a^2 \operatorname{var}(X) + b^2 \operatorname{var}(Y)
+ 2ab \operatorname{cov}(X, Y).</math>
Here <math>\operatorname{cov}</math> is the covariance, which is zero for uncorrelated random variables.
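For the empirical distribution of a paired data set, the identity above holds exactly, so it can be verified numerically. A sketch with illustrative data (the samples and the constants a, b are arbitrary choices, not from the article):

```python
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0, 1.0, 4.0, 3.0, 5.0]
a, b = 2.0, -3.0
n = len(xs)

# Population-style (divide-by-n) moments of the two samples.
mx = sum(xs) / n
my = sum(ys) / n
var_x = sum((x - mx) ** 2 for x in xs) / n
var_y = sum((y - my) ** 2 for y in ys) / n
cov_xy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

# Variance of the combined variable aX + bY, computed directly.
zs = [a * x + b * y for x, y in zip(xs, ys)]
mz = sum(zs) / n
var_z = sum((z - mz) ** 2 for z in zs) / n

# Right-hand side of the identity.
rhs = a * a * var_x + b * b * var_y + 2 * a * b * cov_xy
print(var_z, rhs)  # both ≈ 6.8
```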
Population variance and sample variance
In statistics, the concept of variance can also be used to describe a set of data. When the set of data is a population, it is called the population variance. If the set is a sample, we call it the sample variance.
The population variance of a finite population of values <math>y_i</math>, where i = 1, 2, ..., N, is given by
 <math>\sigma^2 = \frac{1}{N} \sum_{i=1}^N
\left( y_i - \mu \right) ^ 2,</math>
where <math>\mu</math> is the population mean. In practice, when dealing with large populations, it is almost never possible to find the exact value of the population variance, due to time, cost, and other resource constraints.
A common method of estimating the population variance is sampling. When estimating the population variance from n random samples <math>x_i</math>, where i = 1, 2, ..., n, the following formula is an unbiased estimator:
 <math>s^2 = \frac{1}{n-1} \sum_{i=1}^n
\left( x_i - \overline{x} \right) ^ 2,</math>
where <math>\overline{x}</math> is the sample mean.
Note that the n − 1 in the denominator above contrasts with the equation for <math>\sigma^2</math>. One common source of confusion is that the term sample variance may refer either to the unbiased estimator <math>s^2</math> of the population variance given above, or to what is strictly speaking the variance of the sample, computed by using n instead of n − 1.
Intuitively, computing the variance by dividing by n instead of n − 1 gives an underestimate of the population variance. This is because we are using the sample mean <math>\overline{x}</math> as an estimate of the unknown population mean <math>\mu</math>, and the sample mean is precisely the value that minimizes the sum of squared deviations, so deviations from it are on average smaller than deviations from <math>\mu</math>. In practice, for large n, the distinction is often a minor one.
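The bias can be seen in a simple Monte Carlo experiment: averaged over many samples, the n − 1 divisor recovers the true variance, while the n divisor falls short by a factor of (n − 1)/n. A sketch, using a standard normal population and a small sample size as arbitrary illustrative choices:

```python
import random

random.seed(0)
n = 5             # sample size
trials = 100_000  # number of simulated samples

sum_unbiased = 0.0
sum_biased = 0.0
for _ in range(trials):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]  # true sigma^2 = 1
    xbar = sum(sample) / n
    ss = sum((x - xbar) ** 2 for x in sample)
    sum_unbiased += ss / (n - 1)  # divide by n - 1
    sum_biased += ss / n          # divide by n

print(sum_unbiased / trials)  # ≈ 1.0, the true variance
print(sum_biased / trials)    # ≈ 0.8, i.e. (n - 1)/n times the true variance
```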
An unbiased estimator
We will demonstrate why <math>s^2</math> is an unbiased estimator of the population variance. An estimator <math>\hat{\theta}</math> for a parameter <math>\theta</math> is unbiased if <math>\operatorname{E}\{ \hat{\theta}\} = \theta</math>. Therefore, to prove that <math>s^2</math> is unbiased, we will show that <math>\operatorname{E}\{ s^2\} = \sigma^2</math>. We assume that the population from which the <math>x_i</math> are drawn has mean <math>\mu</math> and variance <math>\sigma^2</math>, and that the <math>x_i</math> are drawn independently.
 <math> \operatorname{E} \{ s^2 \}
= \operatorname{E} \left\{ \frac{1}{n-1} \sum_{i=1}^n \left( x_i - \overline{x} \right) ^ 2 \right\}
</math>
 <math>
= \frac{1}{n-1} \sum_{i=1}^n \operatorname{E} \left\{ \left( x_i - \overline{x} \right) ^ 2 \right\}
</math>
 <math>
= \frac{1}{n-1} \sum_{i=1}^n \operatorname{E} \left\{ \left( (x_i - \mu) - (\overline{x} - \mu) \right) ^ 2 \right\}
</math>
 <math>
= \frac{1}{n-1} \sum_{i=1}^n \left[ \operatorname{E} \left\{ (x_i - \mu)^2 \right\}
- 2 \operatorname{E} \left\{ (x_i - \mu) (\overline{x} - \mu) \right\}
+ \operatorname{E} \left\{ (\overline{x} - \mu) ^ 2 \right\} \right]
</math>
 <math>
= \frac{1}{n-1} \sum_{i=1}^n \left[ \sigma^2
- 2 \left( \frac{1}{n} \sum_{j=1}^n \operatorname{E} \left\{ (x_i - \mu) (x_j - \mu) \right\} \right)
+ \frac{1}{n^2} \sum_{j=1}^n \sum_{k=1}^n \operatorname{E} \left\{ (x_j - \mu) (x_k - \mu) \right\} \right]
</math>
Since the <math>x_i</math> are independent, <math>\operatorname{E} \{ (x_i - \mu)(x_j - \mu) \} = \sigma^2</math> when <math>i = j</math> and zero otherwise, so each inner sum contributes <math>\sigma^2</math> per matching index:
 <math>
= \frac{1}{n-1} \sum_{i=1}^n \left[ \sigma^2
- \frac{2 \sigma^2}{n}
+ \frac{\sigma^2}{n} \right]
</math>
 <math>
= \frac{1}{n-1} \sum_{i=1}^n \frac{(n-1)\sigma^2}{n} </math>
 <math>
= \frac{(n-1)\sigma^2}{n-1} = \sigma^2.
</math>
See also algorithms for calculating variance.
Generalizations
If X is a vector-valued random variable, with values in R^n, and thought of as a column vector, then the natural generalization of variance is E[(X − μ)(X − μ)^T], where μ = E(X) and X^T is the transpose of X, and so is a row vector. This variance is a nonnegative-definite square matrix, commonly referred to as the covariance matrix.
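This generalization can be sketched by computing the empirical covariance matrix of a small vector-valued sample as an average of outer products of the centered observations (the data below are illustrative):

```python
def covariance_matrix(samples):
    """Empirical E[(X - mu)(X - mu)^T] for a list of equal-length observations."""
    n = len(samples)
    d = len(samples[0])
    # Componentwise mean vector mu = E(X).
    mu = [sum(s[k] for s in samples) / n for k in range(d)]
    # Average of outer products (X - mu)(X - mu)^T.
    return [[sum((s[j] - mu[j]) * (s[k] - mu[k]) for s in samples) / n
             for k in range(d)] for j in range(d)]

obs = [[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]]
print(covariance_matrix(obs))  # [[1.25, 0.75], [0.75, 1.25]]
```

The diagonal entries are the variances of the individual components, and the matrix is symmetric, as the formula E[(X − μ)(X − μ)^T] requires.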
If X is a complex-valued random variable, then its variance is E[(X − μ)(X − μ)^*], where X^* is the complex conjugate of X. This variance is a nonnegative real number.
History
The term variance was first introduced by Ronald Fisher in his 1918 paper "The Correlation Between Relatives on the Supposition of Mendelian Inheritance".
Moment of inertia
The variance of a probability distribution is equal to the moment of inertia in classical mechanics of a corresponding linear mass distribution, with respect to rotation about its center of mass. It is because of this analogy that such things as the variance are called moments of probability distributions.
See also
 expected value
 standard deviation
 skewness
 kurtosis
 statistical dispersion
 an inequality on location and scale parameters
 law of total variance