Sensitivity (tests)
The sensitivity of a binary classification test or algorithm, such as a blood test to determine whether a person has a certain disease, or an automated system for detecting faulty products in a factory, is a parameter that describes the test's performance. The sensitivity of such a test is the proportion of all positive cases tested (e.g., people with the disease, faulty products) that receive a positive test result.
- <math>{\rm sensitivity}=\frac{\rm number\ of\ true\ positives}{{\rm number\ of\ true\ positives}+{\rm number\ of\ false\ negatives}}.</math>
A sensitivity of 100% means that all sick people or faulty products were recognized as such.
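As an illustrative sketch (not part of the original article), sensitivity can be computed directly from the counts of true positives and false negatives; the function name and counts below are hypothetical:
<syntaxhighlight lang="python">
def sensitivity(true_positives: int, false_negatives: int) -> float:
    """Proportion of actual positive cases that the test flags as positive."""
    total_positives = true_positives + false_negatives
    if total_positives == 0:
        raise ValueError("No positive cases: sensitivity is undefined.")
    return true_positives / total_positives

# Hypothetical counts: 90 sick people correctly detected, 10 missed.
print(sensitivity(true_positives=90, false_negatives=10))  # 0.9, i.e. 90%
</syntaxhighlight>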
Sensitivity alone does not fully characterize a test, because a sensitivity of 100% can be achieved trivially by labeling every test case positive. Therefore, we also need to know the specificity of the test.
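To see why, consider a hypothetical classifier that ignores its input and labels everything positive; in this sketch (with assumed labels, not from the source) it achieves perfect sensitivity while its specificity collapses to zero:
<syntaxhighlight lang="python">
# Hypothetical ground-truth labels: True = positive (e.g., sick), False = negative.
actual = [True, True, False, False, False]

# A degenerate "test" that labels every case positive.
predicted = [True] * len(actual)

tp = sum(a and p for a, p in zip(actual, predicted))          # true positives
fn = sum(a and not p for a, p in zip(actual, predicted))      # false negatives
tn = sum(not a and not p for a, p in zip(actual, predicted))  # true negatives
fp = sum(not a and p for a, p in zip(actual, predicted))      # false positives

print(tp / (tp + fn))  # sensitivity = 1.0 (100%), trivially
print(tn / (tn + fp))  # specificity = 0.0 (0%): the test is useless
</syntaxhighlight>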
The F-measure can be used as a single measure of the test's performance. The F-measure is the harmonic mean of precision and recall:
- <math>F = 2 \cdot \frac{{\rm precision} \cdot {\rm recall}}{{\rm precision} + {\rm recall}}.</math>
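A minimal sketch of this computation, assuming precision and recall are already known (the values here are hypothetical):
<syntaxhighlight lang="python">
def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the F1 score)."""
    if precision + recall == 0:
        return 0.0  # convention: F is 0 when both components are 0
    return 2 * precision * recall / (precision + recall)

# Hypothetical values: recall (sensitivity) of 0.9, precision of 0.75.
print(f_measure(precision=0.75, recall=0.9))  # ~0.818
</syntaxhighlight>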
In the traditional language of statistical hypothesis testing, the sensitivity of a test is called its statistical power, although the word "power" has a more general usage in that setting that does not apply here. A highly sensitive test makes few Type II errors.
In information retrieval, sensitivity is called recall.