I noticed this meme on Twitter, with many people sharing something like: “Did you hear about the doctor that sent a batch of unused Covid-19 tests to the lab and they all came back positive?”
This sounds like a typical fake social media meme. The Poynter Institute notes there is no evidence this ever occurred; however, they then go on to make a statistical error.
No Evidence: U.S. hospital laboratories falsify COVID-19 tests to inflate the number of positive results. Nurses sent unused swabs for COVID-19 testing and they all came back positive.
Explanation: There is no evidence that hospital laboratories have falsified the number of COVID-19 positives. While false-positives can occur, experts agree that they are a rare event. On the contrary, false-negatives are hard to control as they can be due to many different factors apart from test accuracy, including the type of specimen analyzed, the sample handling, or the stage of infection. Failing to correctly identify infected people would promote disease transmission and constitute a public health risk.
Source: No Evidence: U.S. hospital laboratories falsify COVID-19 tests to inflate the number of positive results. Nurses sent unused swabs for COVID-19 testing and they all came back positive. – Poynter
Reporters do not understand simple statistical concepts; the misunderstanding here is the claim that false positives are “a rare event.” It is true that most tests have a low false positive rate, especially the RT-PCR test, which is said to be close to 100% accurate (though even a test with a low FP rate can produce bad results when specimens are mishandled anywhere from collection through the lab, in some cases producing error rates as high as one-third). Some are now using so-called “rapid tests” to screen large groups of asymptomatic people, and this is where the false positive problem mushrooms.
For example, some colleges in my state are using rapid tests to screen about 1,000 randomly selected students per week in order to understand the prevalence of the disease in the college community.
Some of the tests are on the order of 94-98% reliable, meaning their false positive rate may be 2-6%. (A paper in The Lancet says the FP rate in the UK appears to be 0.8% to 4%.) That sounds good, doesn’t it?
Until you think about it. Let’s assume our false positive (FP) rate is 2%, which would be good for a rapid test.
Let’s assume that among the 1,000 students tested, we know from other data that about 1 in 500 currently have the disease. This means when we run our screen, we expect to find 2 actual cases of the disease.
However, with a 2% FP rate applied to the roughly 998 uninfected students, we expect about 20 students to test positive who do not have the disease.
We do not know which of the 22 positive students (2 + 20) are actually infected, so we require all of them to quarantine for 14 days.
In this simple scenario we found 2 actual cases and 20 false positives. Only 2 of the 22 positive results, about 9%, are real; the other 91% are wrong. This positive predictive value is very different from the intuitive “2% FP rate,” which sounds great.
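The arithmetic above can be checked with a short script. This is just a sketch of the scenario in the text; the 100% sensitivity assumption (the test catches both real cases) is mine, added for simplicity:

```python
# Screening scenario from the text: 1,000 students, prevalence 1 in 500,
# 2% false positive rate. Assumes the test catches every true case
# (100% sensitivity) -- a simplifying assumption, not a claim from the article.
n = 1000
prevalence = 1 / 500
fp_rate = 0.02

true_pos = n * prevalence                # 2 students actually infected
false_pos = (n - true_pos) * fp_rate     # ~20 healthy students flagged anyway

ppv = true_pos / (true_pos + false_pos)  # share of positive results that are real

print(f"true positives:  {true_pos:.0f}")
print(f"false positives: {false_pos:.0f}")
print(f"positive predictive value: {ppv:.0%}")
```

Running this reproduces the numbers in the text: 2 true positives, about 20 false positives, and a positive predictive value of roughly 9%.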
This problem occurs in any screening situation where the prevalence of the thing being looked for is very low. Because of this, the makers of rapid tests generally instruct that positive results be confirmed by the more accurate RT-PCR test. It appears this was not being done for at least some of the college students (I asked the university, but it did not reply). Specimens were collected from large groups of students, analyzed, and students received a text update within 90 minutes; this sounds like rapid tests. Some of the students who tested positive immediately went to a doctor’s office and were re-tested, and the second test was negative. The university still made them quarantine for 14 days, during which they remained asymptomatic.
While Poynter is correct that FPs are rare, that is true in the context of testing a single individual, especially one who already has symptoms. Earlier, we tended to test symptomatic people, and we might have found that 10% tested positive. In other words, for every 100 people tested, we’d expect about 10 true positives and, at a 2% FP rate, about 2 false positives. When the prevalence is this high, the results are far more reliable: 10 TP to 2 FP, completely reversed from the scenario above.
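Running the same arithmetic for the symptomatic scenario shows the reversal. Again a sketch, using the article’s rough figure of 2 false positives per 100 tested:

```python
# Symptomatic testing scenario: 100 people tested, ~10% truly infected,
# 2% FP rate (the article's rough 2-in-100 figure).
n = 100
true_pos = 10                    # 10% prevalence among the symptomatic
false_pos = n * 0.02             # ~2 false positives

ppv = true_pos / (true_pos + false_pos)
print(f"positive predictive value: {ppv:.0%}")  # most positives are now real
```

Here 10 of 12 positives, about 83%, are genuine, compared with roughly 9% in the screening scenario.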
But now, we are starting to screen large groups of randomly selected (no symptoms) individuals using tests with slightly less accurate diagnostic capability.
As you can see above, even a very low FP rate can translate into numerous false positives when the test is used to screen large groups when the prevalence is low. When the prevalence is high, the FP problem mostly goes away.
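To see how sharply this depends on prevalence, here is a small sweep. This is my own illustration, holding the FP rate at 2% and again assuming 100% sensitivity:

```python
def ppv(prevalence, fp_rate=0.02, sensitivity=1.0):
    """Fraction of positive results that are true positives."""
    tp = prevalence * sensitivity        # expected true positives per person tested
    fp = (1 - prevalence) * fp_rate      # expected false positives per person tested
    return tp / (tp + fp)

# PPV climbs from ~9% at very low prevalence toward ~96% at 30% prevalence.
for p in (0.002, 0.01, 0.05, 0.10, 0.30):
    print(f"prevalence {p:5.1%} -> PPV {ppv(p):5.1%}")
```

The same 2% FP rate that swamps a low-prevalence screen becomes a minor nuisance once a sizable fraction of those tested are actually infected.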
Everyone in health care knows this, but no one wishes to talk about it. In their view, the screen should catch everyone who may be truly positive, and they are okay with erring on that side, at the risk that many of those tagged in this manner are false positives.