Gold Standard for Disease
|               | Disease Positive   | Disease Negative   | Total             |
|---------------|--------------------|--------------------|-------------------|
| Test Positive | [a] True Positive  | [b] False Positive | Positive [a+b]    |
| Test Negative | [c] False Negative | [d] True Negative  | Negative [c+d]    |
| Total         | Diseased [a+c]     | Non-Diseased [b+d] | Animals [a+b+c+d] |
- Sensitivity: Probability that a truly diseased animal will test positive. [ a/(a+c) ]
- Specificity: Probability that a truly healthy (non-diseased) animal will test negative. [ d/(b+d) ]
- Apparent Prevalence: Percentage of test-positive animals out of the total population tested. [ (a+b)/(a+b+c+d) ]
- True Prevalence: Percentage of truly diseased animals out of the total population tested. [ (a+c)/(a+b+c+d) ]
- Positive Predictive Value: Percentage of animals that test positive and truly are diseased. [ a/(a+b) ]
- Negative Predictive Value: Percentage of animals that test negative and are truly not diseased. [ d/(c+d) ]
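The definitions above map directly onto the cells of the 2×2 table. A minimal Python sketch, using made-up counts for a, b, c, and d (the numbers are purely illustrative):

```python
# Hypothetical 2x2 table counts (illustrative only):
# a = true positives, b = false positives, c = false negatives, d = true negatives
a, b, c, d = 90, 30, 10, 170
total = a + b + c + d              # 300 animals tested

sensitivity = a / (a + c)          # 90 / 100  = 0.90
specificity = d / (b + d)          # 170 / 200 = 0.85
apparent_prev = (a + b) / total    # 120 / 300 = 0.40
true_prev = (a + c) / total        # 100 / 300 = 0.33
ppv = a / (a + b)                  # 90 / 120  = 0.75
npv = d / (c + d)                  # 170 / 180 = 0.94
```

Note that with these counts the PPV (0.75) falls well below the sensitivity (0.90), because the false positives in cell b dilute the pool of test-positive animals.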
Is a Test Useful?
When examining a report of a new diagnostic test, several guidelines help determine whether the test is clinically useful:
- The first consideration is whether there has been an independent, masked comparison to a "Gold" standard of diagnosis. If not, you may question whether data collected using this test are useful.
- Has the test been evaluated in patients with acute versus chronic disease, mild versus severe disease, treated versus untreated animals, and animals with other similar, but different, disorders?
- Was the setting of the study adequately described?
- Has the reproducibility (repeatability) of the test been determined? Has the interpretation of the precision (observer variation) been determined?
- Has "normal" been defined sensibly for this test? Is the normal range representative of the population?
- If this test has been used in a sequence of tests, has its contribution to the overall validity of the diagnostic effort been determined?
- Based on the information provided, would you be able to replicate this test in a population?
- Has the utility of the test been determined?
The diagnostic test, if useful in the diagnostic process, should provide an accurate diagnosis, support application of specific treatments, and, hopefully, lead to a better clinical outcome.
Examples of reasons for false positive (FP) and false negative (FN) test results
False Positive (FP):
- Group cross-reactions between antibodies to different organisms with similar epitopes on antigens, e.g. Mycobacterium sp.
- Non-specific inhibitors that mimic the effects of antibody in its absence.
- Agglutination of antigen by non-specific agglutinins.
False Negative (FN):
- Natural or induced tolerance to the antigens, e.g. animals persistently infected with BVD virus.
- Improper timing due to stage of infection or appearance of antibody.
- Improper selection of the test such as a neutralizing antibody test when that type of antibody is not produced.
- Non-specific inhibitors - anti-complementary serum, toxic substances.
- Antibiotic induced immunoglobulin suppression.
- Incomplete or blocking antibodies, e.g. CFT testing for Brucella.
- Insensitive tests.
Serologic diagnostic tests may be qualitative or quantitative. A qualitative test returns a result that is simply positive or negative, which makes it easy to determine the test's sensitivity and specificity by comparison to a "Gold" standard. However, if test results are reported on a continuous scale, a cutoff point must be chosen, and altering that point may lead to tremendous changes in sensitivity, specificity, false negatives, and false positives.
We look at a population of animals and classify each as either healthy or diseased. When examining an antibody response in populations, there will usually be some overlap: animals may appear healthy but actually be diseased, and vice versa. This may be due to laboratory error or perhaps a failure in the validation process. When a test result is on a continuous scale, as with ELISA tests, we must establish the cutoff point that most accurately separates healthy from sick animals.
We will use 3 different cutoff points to illustrate the changes that occur in sensitivity and specificity with resultant changes in false negative and positive rates depending on the cutoff point.
The test result in the graph ranges from lowest to highest when read from left to right. If we set the cutoff at point A, the resulting test is highly sensitive (100%), but the specificity is lower: there are no false negatives, but there are a large number of false positive results. Such a test would be useful as a screening test for a disease where the cost of a FN is high.
Suppose we set the cutoff at point B. The result would be a test with equal sensitivity and specificity, and equal numbers of FPs and FNs. The usefulness of a test with these characteristics would be questionable.
Lastly, we will set the cutoff point at point C. This cutoff leads to an insensitive test where the specificity is extremely high (100%). This type of test would be useful in a situation where the cost of a FP is high.
So, when test results are recorded on a continuous scale, increasing the specificity of a test lowers its sensitivity, and vice versa. This underscores the importance of understanding how a test was developed, the conditions under which it was studied, its precision, and its validity.
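The three cutoff points can be sketched with made-up test readings (a minimal illustration; the values and the cutoffs A = 0.35, B = 0.50, C = 0.65 are assumptions, not data from any real assay):

```python
# Hypothetical continuous test readings for animals of known status
# (illustrative values only; note the overlap between the two groups).
healthy  = [0.10, 0.20, 0.30, 0.40, 0.45, 0.60]
diseased = [0.40, 0.50, 0.60, 0.70, 0.80, 0.90]

def se_sp(cutoff):
    """Call readings >= cutoff test-positive; return (sensitivity, specificity)."""
    se = sum(v >= cutoff for v in diseased) / len(diseased)
    sp = sum(v < cutoff for v in healthy) / len(healthy)
    return se, sp

for label, cutoff in [("A", 0.35), ("B", 0.50), ("C", 0.65)]:
    se, sp = se_sp(cutoff)
    print(f"Point {label} (cutoff {cutoff}): Se = {se:.2f}, Sp = {sp:.2f}")
```

With these numbers, point A gives Se = 1.00 at the cost of specificity, point B gives equal Se and Sp, and point C gives Sp = 1.00 at the cost of sensitivity, mirroring the trade-off described above.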
Using a Test
The sensitivity and specificity of a test are generally considered fixed, and they yield different predictive values depending on the prevalence of disease in the population you are examining. This can be used to your advantage, based on the information you want to generate. For example, you may elect not to test for a condition where the prevalence of the disease is low, because a positive test result may be very difficult to interpret. A negative result, however, may be very informative: in that instance the value of a negative confirms your belief of "non-diseased". As the prevalence of disease increases, so does the positive predictive value. Conversely, as the prevalence decreases, the negative predictive value increases. The following graph may help to illustrate this point:
The relationship between prevalence and predictive values can be visualized in the above graph. As you can see, changes in prevalence result in changes in predictive values, positive in one direction and negative in the other. This is important for applying diagnostic tests both to individual animals and to larger populations.
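The prevalence effect follows directly from Bayes' theorem, since sensitivity and specificity are fixed properties of the test. A minimal sketch (the Se = Sp = 0.95 test and the prevalence values are assumptions for illustration):

```python
def predictive_values(se, sp, prevalence):
    """PPV and NPV for a test with fixed Se/Sp at a given disease prevalence."""
    tp = se * prevalence              # expected fraction of true positives
    fp = (1 - sp) * (1 - prevalence)  # expected fraction of false positives
    tn = sp * (1 - prevalence)        # expected fraction of true negatives
    fn = (1 - se) * prevalence        # expected fraction of false negatives
    return tp / (tp + fp), tn / (tn + fn)

# Assumed test characteristics: Se = Sp = 0.95
for prev in (0.01, 0.10, 0.50):
    ppv, npv = predictive_values(0.95, 0.95, prev)
    print(f"prevalence = {prev:.2f}: PPV = {ppv:.3f}, NPV = {npv:.3f}")
```

At 1% prevalence most positives are false (PPV ≈ 0.16) while a negative is almost certainly correct (NPV > 0.999); at 50% prevalence the PPV rises to 0.95, exactly the pattern described above.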