There has been a lot in the news recently about using antibody tests to detect people who have had Covid-19 and who might therefore be immune to further infection (this remains to be proven). Despite the relatively good performance of many of the proposed antibody tests, with sensitivity and specificity above 95% in most cases, the low prevalence of infection means that the Positive Predictive Value (PPV) of these tests is low.
For example, a test with 98% sensitivity (ability to detect true positives) and 98% specificity (ability to avoid false positives) will still yield positive results of which nearly 30% are false if the underlying rate of infection is around 5% (i.e. a PPV of 72.1%). Thus, for every 1000 people tested there will be on average 50 people with antibodies to SARS-CoV-2, and the test, being 98% sensitive, will correctly detect 49 of those 50. However, among the 950 people who do not have antibodies there will be 19 false positives. Despite the high specificity of the test, the large number of non-infected people means that the 2% of them who give a false positive result make up a substantial proportion of all positive tests: in this case 19 out of 68. Health authorities have quite rightly pointed out that this is insufficient to qualify people as ‘immune’.
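The arithmetic above can be written out as a short calculation (a minimal Python sketch; the `ppv` function name and its parameters are illustrative, not from the original):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: true positives as a fraction of all positives."""
    true_pos = sensitivity * prevalence              # e.g. 0.98 * 0.05  = 0.049
    false_pos = (1 - specificity) * (1 - prevalence) # e.g. 0.02 * 0.95  = 0.019
    return true_pos / (true_pos + false_pos)

# 98% sensitivity and specificity at 5% prevalence:
print(round(ppv(0.98, 0.98, 0.05), 3))  # 0.721
```

Scaling the same fractions to 1000 people recovers the 49 true and 19 false positives described above.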
However, there is a relatively simple solution. By using two independent antibody tests with similar specificity and sensitivity, and counting as positive only those who test positive on both, the PPV can be increased to 99.2%. This is because 48 of the previous true positives will again test positive (98% of 49), but on average only 0.4 of the 19 false positives (2% of 19) will give a second positive result. The two tests would have to be truly independent, but as there are now numerous tests available (or in development) it should be possible to find two that can be combined to achieve a PPV useful for making public health decisions.
What we have done here, in effect, is to increase the underlying rate of true positives for the second test (to 72%, i.e. 49 of the initial 68 positive tests). Under these conditions our antibody test meets our expectations of what a 98% test should do.
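The two-test logic can be sketched the same way: the PPV of the first test becomes the effective prevalence seen by the second, independent test (a minimal Python sketch; the `ppv` helper is illustrative, not from the original):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value: true positives as a fraction of all positives."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

first = ppv(0.98, 0.98, 0.05)   # ~0.721 at the original 5% prevalence
# Among people who tested positive once, ~72% truly have antibodies,
# so that is the underlying rate the second test operates on.
second = ppv(0.98, 0.98, first)
print(round(second, 3))  # 0.992
```

This assumes the two tests' errors are truly independent, as the text notes; any correlation between their false positives would reduce the combined PPV.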
Too often we look at test performance without considering the underlying rate. Tests to predict low-incidence disease must be extremely good to be useful as diagnostic tools, simply because of the large number of true negatives. The same holds true for the analysis of most scientific experiments: we use an alpha of 0.05 and a beta of 0.2 to decide whether our studies are significant. This is equivalent to 95% specificity (1 − α) and 80% sensitivity (1 − β, the statistical power). In the example above that would give only 40 true positives but around 48 false positives!
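As a quick check (a sketch, taking specificity = 1 − α and sensitivity = 1 − β, i.e. the power, and keeping the 5% underlying rate and 1000 subjects from the earlier example):

```python
# Significance testing viewed as a diagnostic test.
sensitivity = 0.80   # power, 1 - beta
specificity = 0.95   # 1 - alpha
prevalence = 0.05
n = 1000

true_pos = sensitivity * prevalence * n               # 40 true positives
false_pos = (1 - specificity) * (1 - prevalence) * n  # 47.5 false positives
print(true_pos, false_pos, true_pos / (true_pos + false_pos))
```

Even with a conventionally 'significant' result, fewer than half of the positives are real under these assumptions.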
We rarely know the true underlying rate of true positives in our experiments. We might suspect that for structured, confirmatory studies it is quite high, maybe above 50%; but for exploratory studies, screening of compound libraries and the like, it might be less than 5%. Interpretation and analysis of these experiments need to consider such differences if we want to reach robust conclusions. With the importance of the underlying rate for calculating PPV now getting a more public airing, we can only hope that it will be considered more in data analysis.

Additional note:
Whilst this piece was being written, Roche announced a new FDA-approved test claiming 100% sensitivity and 99.8% specificity, which would bring the PPV up to 96.3% at the 5% prevalence assumed above. This shows that the bar needs to be set very high for low-incidence events, but Roche seem to have succeeded.
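The same one-line calculation confirms this figure (a sketch, assuming the 5% prevalence used throughout):

```python
# PPV for a test with 100% sensitivity and 99.8% specificity at 5% prevalence.
true_pos = 1.00 * 0.05
false_pos = (1 - 0.998) * (1 - 0.05)
print(round(true_pos / (true_pos + false_pos), 3))  # 0.963
```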