Reliability (statistics)
In statistics, reliability is the consistency of the scores of a measure. Reliability does not imply validity: a reliable measure measures something consistently, but not necessarily what it is supposed to measure. For example, while there are many reliable tests, not all of them would validly predict job performance.
Estimation
Reliability may be estimated through a variety of methods that fall into two types: single-administration and multiple-administration. Multiple-administration methods require that two assessments be administered. In the test-retest method, reliability is estimated as the Pearson product-moment correlation coefficient between two administrations of the same measure. In the alternate-forms method, reliability is estimated by the Pearson product-moment correlation coefficient between two different forms of a measure, usually administered together. Single-administration methods include split-half and internal consistency. The split-half method treats the two halves of a measure as alternate forms. This half-test reliability estimate is then stepped up to the full test length using the Spearman-Brown prediction formula. The most common internal consistency measure is Cronbach's α, which is usually interpreted as the mean of all possible split-half coefficients.
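As a rough illustration of these estimators, the following sketch computes a test-retest correlation, a split-half estimate stepped up with the Spearman-Brown formula, and Cronbach's α from a simulated respondents-by-items score matrix. The data and variable names are invented for illustration and do not come from any particular package.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
true_score = rng.normal(size=(200, 1))                      # latent trait
scores = true_score + rng.normal(scale=0.8, size=(200, 8))  # first administration
retest = true_score + rng.normal(scale=0.8, size=(200, 8))  # second administration

# Test-retest (or alternate forms): Pearson correlation of the two total scores.
r_test_retest = np.corrcoef(scores.sum(axis=1), retest.sum(axis=1))[0, 1]

# Split-half: correlate odd- and even-item halves, then step up to the full
# test length with the Spearman-Brown prediction formula.
odd, even = scores[:, 0::2].sum(axis=1), scores[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd, even)[0, 1]
r_split_half = 2 * r_half / (1 + r_half)

# Internal consistency: Cronbach's alpha.
k = scores.shape[1]
item_vars = scores.var(axis=0, ddof=1)
total_var = scores.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)

print(r_test_retest, r_split_half, alpha)
</syntaxhighlight>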
Each of these estimation methods is sensitive to different sources of error, so they should not be expected to yield equal estimates. Also, reliability is a property of the scores of a measure rather than of the measure itself, and is thus said to be sample dependent. Reliability estimates from one sample might differ from those of a second sample (beyond what might be expected due to sampling variation) if the second sample is drawn from a different population, because the true reliability is different in this second population. (This is true of measures of all types: yardsticks might measure houses well yet have poor reliability when used to measure the lengths of insects.)
Reliability may be improved by clarity of expression (for written assessments), by lengthening the measure, and by other informal means. However, formal psychometric analysis is considered the most effective approach. Such analysis generally involves computing item statistics such as the item-total correlation, the correlation between the item score and the sum of the item scores of the entire test. These measures are inherently circular, since each item contributes to the total against which it is correlated, but in practice they work well if the test has been constructed carefully so that its initial draft is already reasonably reliable.
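For instance, a minimal sketch of the item-total correlation just described, reusing the hypothetical scores matrix from the previous example, might look like this:

<syntaxhighlight lang="python">
import numpy as np

def item_total_correlations(scores):
    """Correlation of each item score with the total test score."""
    total = scores.sum(axis=1)
    return np.array([np.corrcoef(scores[:, j], total)[0, 1]
                     for j in range(scores.shape[1])])

# A common refinement (the "corrected" item-total correlation) instead
# correlates each item with the sum of the remaining items, removing the
# item's own contribution and thus part of the circularity noted above.
</syntaxhighlight>

Items with low or negative item-total correlations are typical candidates for revision or removal when refining a draft test.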
Classical test theory
In classical test theory, reliability is defined mathematically as the ratio of the variance of the true score to the variance of the observed score, or, equivalently, one minus the ratio of the variance of the error score to the variance of the observed score:
<math>\rho_{XX'} = \frac{\sigma^2_T}{\sigma^2_X} = 1 - \frac{\sigma^2_E}{\sigma^2_X}</math>
where <math>\rho_{XX'}</math> is the symbol for the reliability of the observed score, X; <math>\sigma^2_X</math>, <math>\sigma^2_T</math>, and <math>\sigma^2_E</math> are the variances of the observed, true, and error scores, respectively.
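As an informal check of this identity, one can simulate scores under the model X = T + E with error independent of the true score; the true-score variance ratio, the error-variance form, and the correlation between two parallel forms should then all agree. The simulation below is purely illustrative.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=100_000)                # true scores, variance 1
E1 = rng.normal(scale=0.5, size=T.shape)    # error on form 1, variance 0.25
E2 = rng.normal(scale=0.5, size=T.shape)    # error on form 2, variance 0.25
X1, X2 = T + E1, T + E2                     # two parallel observed scores

print(T.var() / X1.var())                   # sigma_T^2 / sigma_X^2, ~0.8 here
print(1 - E1.var() / X1.var())              # 1 - sigma_E^2 / sigma_X^2, same quantity
print(np.corrcoef(X1, X2)[0, 1])            # parallel-forms estimate of rho
</syntaxhighlight>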
Item response theory
It was well known to classical test theorists that measurement precision is not uniform across the scale of measurement. Tests tend to distinguish better among test-takers with moderate trait levels and worse among high- and low-scoring test-takers. Item response theory extends the concept of reliability from a single index to a function called the information function. At any given trait level, the information function equals the reciprocal of the conditional error variance, so the conditional standard error of measurement is the inverse square root of the information. Higher information indicates higher precision and thus greater reliability.
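As a sketch, under a two-parameter logistic (2PL) model the information contributed by an item at trait level θ is a²P(θ)(1 − P(θ)), where a is the item's discrimination and P(θ) its response probability; the test information is the sum over items, and the conditional standard error is its inverse square root. The item parameters below are invented for illustration.

<syntaxhighlight lang="python">
import numpy as np

def item_information(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P(theta) * (1 - P(theta))."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1 - p)

theta = np.linspace(-3, 3, 61)
items = [(1.2, -1.0), (1.5, 0.0), (0.9, 1.0)]  # (discrimination a, difficulty b)
test_info = sum(item_information(theta, a, b) for a, b in items)
standard_error = 1.0 / np.sqrt(test_info)      # conditional standard error

# Information peaks near the items' difficulties, so precision (and hence
# reliability) is highest for moderate trait levels and lower at the extremes.
</syntaxhighlight>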