False negative
A false negative, also called a miss, occurs when a test incorrectly reports that a signal was not detected when it was, in fact, present. Detection algorithms of all kinds often produce misses. For example, if a radar fails to detect an enemy airplane that is present within the scanned area, that is a false negative. In statistical hypothesis testing, a false negative, in which a test accepts the null hypothesis when it is false, is called a Type II error. The false negative rate equals 1 minus the sensitivity of the test.
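These rates can be computed directly from the counts of a test's outcomes. The following minimal sketch uses illustrative, made-up counts to show that the false negative rate is the complement of sensitivity:

```python
# Hypothetical evaluation counts for a detector (illustrative numbers only).
true_positives = 90   # signal present, test reports present
false_negatives = 10  # signal present, test reports absent (misses)

# Sensitivity: fraction of actual positives the test detects.
sensitivity = true_positives / (true_positives + false_negatives)

# False negative rate: fraction of actual positives the test misses.
false_negative_rate = false_negatives / (false_negatives + true_positives)

print(sensitivity)           # 0.9
print(false_negative_rate)   # 0.1
```

By construction the two denominators are identical, so the false negative rate is exactly 1 minus the sensitivity.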
When developing detection algorithms (that is, tests), there is a tradeoff between false negatives and false positives (in which the algorithm reports a match when there is actually none). That is, the risk of Type II errors must be balanced against the risk of Type I errors (false positives that reject the null hypothesis when it is true). Usually there is a threshold for how closely a candidate must match a given sample before the algorithm reports a match. The higher this threshold, the more false negatives and the fewer false positives.
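The threshold tradeoff can be demonstrated with simulated match scores. In this sketch, the score distributions (genuine matches centered at 0.7, non-matches at 0.4) are assumptions chosen purely for illustration:

```python
import random

random.seed(0)

# Simulated similarity scores: genuine pairs tend to score higher
# than impostor (non-matching) pairs. Distributions are illustrative.
genuine = [random.gauss(0.7, 0.1) for _ in range(1000)]
impostor = [random.gauss(0.4, 0.1) for _ in range(1000)]

def error_counts(threshold):
    # Scores below the threshold are reported as "no match".
    false_negatives = sum(1 for s in genuine if s < threshold)
    false_positives = sum(1 for s in impostor if s >= threshold)
    return false_negatives, false_positives

for t in (0.45, 0.55, 0.65):
    fn, fp = error_counts(t)
    print(f"threshold={t:.2f}  false negatives={fn}  false positives={fp}")
```

Raising the threshold rejects more genuine matches (more false negatives) while admitting fewer spurious ones (fewer false positives), which is the tradeoff described above.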
Medical Example
False negatives are a significant issue in medical testing. In some cases, there are two or more (often many) tests that could be used, one of which is simpler and less expensive, but less accurate, than the others. For example, the simplest tests for HIV and hepatitis in blood have a significant rate of false positives. These tests are used to screen out possible blood donors, but more expensive and more precise tests are used in medical practice to determine whether a person is actually infected with these diseases.
False negatives in medical testing falsely reassure both patients and physicians that a disease is absent when it is actually present. This in turn can lead patients to forgo further evaluation, advice, and treatment that would better protect their health. A common example is relying on cardiac stress tests to detect coronary atherosclerosis, even though cardiac stress tests are known to detect only limitations of coronary artery blood flow due to advanced stenosis.
False negatives produce serious and counterintuitive problems, especially when the condition being searched for is common. If a test with a false negative rate of only 10% is used to test a population with a true occurrence rate of 70%, many of the "negative" results will be false negatives. See Bayes' Theorem below.
Biometric Example
False negatives are also a problem in biometric scans, such as retina scans or facial recognition, when the scanner incorrectly reports that someone does not match a known person, when in actuality it is the same person whose scan is in the system.
Bayes' Theorem
The probability that an observed negative result is a false negative versus a true negative may be calculated (and the problem of false negatives demonstrated) using Bayes' theorem. The key concept of Bayes' theorem is that the true rates of false positives and false negatives are not a function of the accuracy of the test alone, but also of the actual rate of the condition within the population being tested. Often, the more powerful influence is that actual rate of the condition in the tested sample.
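The calculation can be sketched with the figures from the example above (10% false negative rate, 70% occurrence rate); the 90% specificity used here is an assumed value for illustration, since the text does not specify one:

```python
def p_present_given_negative(prevalence, sensitivity, specificity):
    """P(condition present | test negative), by Bayes' theorem."""
    p_neg_given_present = 1 - sensitivity  # the false negative rate
    # Total probability of a negative result, over both possibilities.
    p_negative = (prevalence * p_neg_given_present
                  + (1 - prevalence) * specificity)
    return prevalence * p_neg_given_present / p_negative

# 70% prevalence, 90% sensitivity (10% false negative rate),
# and an assumed 90% specificity.
p = p_present_given_negative(prevalence=0.70, sensitivity=0.90,
                             specificity=0.90)
print(round(p, 3))  # 0.206
```

Even with a seemingly accurate test, roughly one in five negative results in this population is a false negative, because the condition is so common that the misses among the many true positives outnumber much of the pool of genuine negatives.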
See Also: False positive