Standard error (statistics)
In statistics, the standard error of a measurement, value or quantity is the standard deviation of the process by which it was generated.
Standard errors provide simple measures of uncertainty in a value and are often used because:
- If the standard errors of several individual quantities are known, then the standard error of some function of those quantities can, in many cases, be easily calculated;
- Where the probability distribution of the value is known, they can be used to calculate an exact confidence interval; and
- Where the probability distribution is unknown, inequalities such as Chebyshev's inequality or the Vysochanskiï-Petunin inequality can be used to calculate a conservative confidence interval, as in the sketch after this list.
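As an illustration of the last point, here is a minimal Python sketch of a Chebyshev-based conservative interval; the data, seed, and 95% level are hypothetical choices, and the sample standard deviation is plugged in where, strictly, Chebyshev's inequality requires the true one:

```python
import math

import numpy as np

rng = np.random.default_rng(0)
sample = rng.exponential(scale=2.0, size=100)  # hypothetical skewed data

mean = sample.mean()
# Estimated standard error of the mean (sigma is unknown here, so the
# sample standard deviation stands in as an approximation).
se = sample.std(ddof=1) / math.sqrt(sample.size)

# Chebyshev's inequality: P(|X - mu| >= k*sigma) <= 1/k^2. For a
# conservative 95% interval, choose k with 1/k^2 = 0.05, i.e. k = sqrt(20).
k = math.sqrt(1 / 0.05)
print(f"conservative 95% CI: {mean:.3f} +/- {k * se:.3f}")

# A normal-theory interval would use 1.96 instead of about 4.47, so the
# Chebyshev bound is much wider -- the price of assuming no distribution.
print(f"normal-theory 95% CI: {mean:.3f} +/- {1.96 * se:.3f}")
```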
The standard error of the mean of a sample from a population is the standard deviation of the sampling distribution of the mean, and may be estimated by the formula:
- <math>\frac{\sigma}{\sqrt{N}}</math>
where <math>\sigma</math> is the standard deviation of the population distribution and N is the size of (the number of items in) the sample.
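A minimal Python sketch of this formula (the population parameters and sample size are hypothetical; numpy is assumed to be available):

```python
import math

import numpy as np

rng = np.random.default_rng(42)
population_sd = 10.0  # hypothetical known sigma
N = 400
sample = rng.normal(loc=50.0, scale=population_sd, size=N)

# Standard error of the mean when the population standard deviation is known:
se_known = population_sd / math.sqrt(N)

# In practice sigma is usually unknown and is replaced by the sample
# standard deviation (ddof=1 gives the unbiased variance estimate):
se_estimated = sample.std(ddof=1) / math.sqrt(N)

print(se_known, se_estimated)  # both near 10 / sqrt(400) = 0.5
```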
An important implication of this formula is that the sample size must be quadrupled (multiplied by 4) to halve the measurement error. When designing statistical studies where cost is a factor, this can be an important consideration in cost-benefit tradeoffs.
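A short simulation (with hypothetical parameters) showing that quadrupling the sample size roughly halves the standard error of the mean:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, trials = 10.0, 10_000

for n in (100, 400):  # 400 is 4x 100
    # The standard deviation of many simulated sample means approximates
    # the standard error sigma / sqrt(n).
    means = rng.normal(0.0, sigma, size=(trials, n)).mean(axis=1)
    print(n, round(means.std(ddof=1), 3))  # ~1.0 for n=100, ~0.5 for n=400
```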