The obscure mathematical theorem that governs the reliability of Covid tests


Maths quiz. If you take a Covid test that gives a false positive only once in every 1,000 tests, and it comes back positive, how likely is it that you actually have Covid? Surely it’s 99.9%, right?

No! The correct answer is: you have no idea. You don’t have enough information to make a judgment.

It’s important to know this when you think about lateral flow tests (LFTs), the rapid Covid tests that the government has made available to everyone in England, for free, up to twice a week. The idea is that, in time, they could be used to give people permission to go to crowded social spaces – pubs, theatres – and to be more confident that they don’t have, and therefore won’t spread, the disease. They have already been used in secondary schools for some time.

There are concerns about LFTs. The first is that they will miss a large number of cases, because they are less sensitive than the slower but more accurate polymerase chain reaction (PCR) test. These concerns are understandable, although supporters of the tests say the PCR test is, if anything, too sensitive, capable of detecting viral material in people who had the disease weeks earlier, whereas LFTs should, in theory, only detect people who are currently infectious.

But another concern is that they will tell people that they do have the disease when in fact they don’t – they will return false positives.

The government says, specifically, that the “false positive rate” – the probability that a test will give a positive result for a person who does not have the disease – is less than one in 1,000. This is where we came in: you might think that means that, if you’ve had a positive result, there’s less than a one in 1,000 chance that it’s wrong.

It’s not. And that’s because of a fascinating little mathematical quirk known as Bayes’ theorem, named after the Rev Thomas Bayes, an 18th-century clergyman and maths nerd.

Bayes’ theorem is written, in mathematical notation, as P(A|B) = P(B|A)P(A) / P(B). It looks complicated. But you don’t have to worry about what all the symbols mean: it’s fairly easy to understand when you think through an example.

Thomas Bayes, author of Bayes’ theorem.

Imagine being tested for a rare disease. The test is incredibly accurate: if you have the disease, it will correctly say so 99% of the time; if you don’t have the disease, it will correctly say so 99% of the time.

But the disease in question is very rare; only one in 10,000 people has it. This is called your “prior probability”: the base rate in the population.

So now imagine you test 1 million people. There are 100 people with the disease: your test correctly identifies 99 of them. And there are 999,900 people who don’t have it: your test correctly identifies 989,901 of them.

But this means that your test, despite giving the right answer 99% of the time, has told 9,999 people that they have the disease when in fact they don’t. So if you get a positive result, your chance of really having the disease is 99 out of 10,098 (the 99 true positives plus the 9,999 false ones), or just under 1%. If you took this test entirely at face value, you would frighten a lot of people, and send them for intrusive and potentially dangerous medical procedures, on the back of a misdiagnosis.
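To make the arithmetic concrete, here is a minimal sketch (in Python; mine, not the article’s) that plugs these numbers into Bayes’ theorem – the function name and layout are just for illustration:

```python
def posterior_given_positive(prior, sensitivity, false_positive_rate):
    """Chance of actually having the disease after a positive test, via Bayes' theorem."""
    true_positives = prior * sensitivity                  # P(positive and diseased)
    false_positives = (1 - prior) * false_positive_rate   # P(positive and healthy)
    return true_positives / (true_positives + false_positives)

# The rare-disease example: 1 in 10,000 have it; the test is right 99% of the time either way.
p = posterior_given_positive(prior=1 / 10_000, sensitivity=0.99, false_positive_rate=0.01)
print(f"Chance you really have the disease: {p:.2%}")     # ~0.98%, i.e. just under 1%
```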

Without knowing the prior probability, you don’t know how likely it is that a result is false or true. If the disease weren’t so rare – if, say, 1% of people had it – your results would be totally different. Then you would have about 9,900 false positives, but also about 9,900 true positives. So if you had a positive result, it would be roughly a coin flip whether it was real.
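Rerunning the hypothetical posterior_given_positive sketch above with a prior of 0.01 instead of one in 10,000 gives exactly 0.5 – the coin-flip result just described.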

This is not a hypothetical problem. One review of the literature found that 60% of women who have annual mammograms for 10 years have at least one false positive; another study found that 70% of positive prostate cancer screening results were false. A prenatal screening procedure for fetal chromosome disorders that claimed “detection rates of up to 99% and false positive rates as low as 0.1%” would actually have returned false positives between 45% and 94% of the time, according to one paper, because the conditions are so rare.

A lateral flow test in progress. Photograph: SlavkoSereda / Getty Images

Of course, it’s not that a positive test would immediately be taken as gospel – patients who test positive will be given more comprehensive diagnostic tests – but it will frighten a lot of patients who don’t have cancer or fetal abnormalities.

Misunderstanding Bayes’ theorem is not only a problem in medicine. There is a common failure of reasoning in the courts, the “prosecutor’s fallacy”, which also hinges on it.

In 1990, a man named Andrew Deen was convicted of rape and sentenced to 16 years, partly on the basis of DNA evidence. An expert witness for the prosecution said the chance of the DNA coming from someone else was only one in 3 million.

But, as a statistics professor explained at Deen’s appeal, this mixed up two different questions: first, how likely is it that a person’s DNA would match the DNA in the sample, given that they were innocent; and second, how likely is it that they are innocent, given that their DNA matched the sample? The prosecutor’s fallacy is to treat those two questions as the same.

We can treat it just as we did the cancer screenings and Covid tests. Let’s say you picked your defendant entirely at random from the UK population (which of course you wouldn’t do, but for simplicity’s sake …), which at the time was about 60 million. So your prior probability that a random person is the culprit is one in 60 million.

If you ran your DNA test on all 60 million people, you would identify the real culprit – but you would also get false positives on around 20 innocent people. So even though the DNA test only returns a false positive one time in 3 million, there is still about a 95% chance that someone who tests positive is innocent.
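The same back-of-the-envelope counting can be written out explicitly. This is a sketch under the article’s simplifying assumptions (one true culprit, a population of 60 million screened entirely at random), not a model of real forensic practice:

```python
population = 60_000_000            # rough UK population at the time
true_matches = 1                   # the one actual culprit
false_match_rate = 1 / 3_000_000   # chance an innocent person's DNA matches by coincidence

false_matches = (population - true_matches) * false_match_rate   # roughly 20 innocent matches
p_innocent_given_match = false_matches / (false_matches + true_matches)
print(f"Chance that a person who matches is innocent: {p_innocent_given_match:.0%}")   # ~95%
```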

Of course, in reality you wouldn’t choose your defendant at random – you would have other evidence, and your prior probability would be much higher than one in 60 million. But the point stands: knowing the chance of a false positive on a DNA test doesn’t tell you how likely a person is to be innocent – for that, you need an assessment of how likely they were to be guilty to begin with. You need the prior probability. In December 1993 the appeal court quashed Deen’s conviction as unsafe, precisely because the judge and the expert witness had fallen for the prosecutor’s fallacy. (He was, it should be noted, convicted again at his retrial.)

And in 1999, Sally Clark’s heart-wrenching case turned on the prosecutor’s fallacy. She was convicted of murdering her two children after an expert witness told the court that the chance of two children in one family dying of sudden infant death syndrome (Sids) was one in 73 million. But the witness ignored the prior probability – that is, the likelihood of someone being a double murderer, which is, thankfully, even rarer than Sids. This, together with other problems – the expert witness also failed to take into account that families that have had one case of Sids are more likely to have another – led to Clark’s conviction also being overturned, in 2003.

Back to LFTs. Suppose the one-in-1,000 false positive rate is correct. Even if it is, and you get a positive result, you still don’t know how likely it is that you have the virus. What you also need to know is (roughly) how likely it was, before you took the test, that you had it: your prior probability.

Solicitor Sally Clark in the High Court with her husband Stephen. Photograph: Dan Chung / The Guardian

At the height of the second wave, around one in 50 people in England (2% of the population) was infected with the virus, according to the Office for National Statistics prevalence survey. That survey used PCR testing, not LFTs, but let’s take it as our benchmark.

Let’s say you tested 1 million randomly selected people with LFTs (and, for simplicity’s sake, say the tests detect every real case – which certainly won’t be true in real life). About 20,000 of those people would have the disease, and of the 980,000 who do not, the tests would incorrectly say that 980 have it, for a total of 20,980 positive results. So if you tested positive, the chance of it being a false positive would be 980/20,980, or a little under 5%. To put it another way, there would be just over a 95% chance that you really had the disease.
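As a rough sketch of that calculation (mine, not the article’s), keeping the same simplifying assumption that every real case is detected:

```python
def false_share_of_positives(tested, prevalence, false_positive_rate):
    """Share of positive results that are false, assuming every real case is detected."""
    true_pos = tested * prevalence
    false_pos = tested * (1 - prevalence) * false_positive_rate
    return false_pos / (true_pos + false_pos)

# Second-wave prevalence of 1 in 50, false positive rate of 1 in 1,000, 1m people tested.
print(f"{false_share_of_positives(1_000_000, 1 / 50, 1 / 1_000):.1%}")   # ~4.7%
```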

Now, however, the prevalence has dropped dramatically – down to around one in 340 people in England. If we follow the same process, we get a very different picture: out of your million people, about 2,950 will have the virus. Again assuming your test identifies them all (and remembering that this won’t actually be true), you’ll have about 2,950 true positives and around 997 false ones. Suddenly the share of your positive results that are false is 997/3,947, or about 25%. In fact, government data last week put the proportion of positive LFT results since 8 March that were false at 18%. This share will go up as prevalence goes down – which could become a problem if, for example, a false positive means an entire class of children has to miss school.
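Plugging the lower prevalence into the same hypothetical false_share_of_positives sketch – one in 340 instead of one in 50 – gives roughly 25%, in line with the figure above.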

These sums only apply, of course, if you really are testing the population at random. If people use the tests because they think there’s a good reason they might be positive – perhaps they have symptoms, or have recently been exposed to someone who has the disease – then their prior probability is higher, and a positive result is stronger evidence.

Even doctors struggle with Bayesian reasoning. In a 2013 study, 5,000 qualified American doctors were asked to give the probability that a person had cancer, if 1% of the population had the disease and the person received a positive result from a test that was 90% accurate. The correct answer was about one in 10, but even when given multiple-choice options, almost three-quarters of the doctors answered incorrectly.
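The question as quoted doesn’t spell out the test’s exact error rates, but one common version of this classic problem assumes 90% sensitivity and a false positive rate of about 9%; fed into the posterior_given_positive sketch above, those assumed figures give roughly 9% – about one in 10.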

None of this means that LFTs are a bad idea – I think, cautiously, that they will be useful, especially since positive results will be confirmed by PCR, and if the PCR comes back negative the person can return to work or school or whatever it might be. But it’s worth remembering that if you read that a test is 99.9% accurate, that doesn’t mean there’s a 99.9% chance that your test result is correct. It is, in fact, rather more complicated than that.

Tom Chivers is a science writer at UnHerd

This article is an adapted excerpt from How to Read Numbers: A Guide to Statistics in the News (and Knowing When to Trust Them) by Tom Chivers and David Chivers (Orion, £12.99). To order a copy, go to guardianbookshop.com. Delivery charges may apply.

