George Rebane
In ‘Testing’s Tower of Babel’ I promised to inform readers about the reliability of using tests to determine the fraction of a target population that has a defined attribute. Today the whole country is supposed to be interested in the extent of the C19 infection, and also in the extent of recovery from the disease. Such test results are supposed to inform government policy makers about the stage of the pandemic, and whether to respond with more draconian quarantines or to begin reviving the nation’s economy – important information indeed.
Let me begin with the punchline of this treatise – determining a population fraction is more complex than people (and policy makers) are led to believe, and the ‘naïve’ approach of just taking the number of those testing positive in a random sample and dividing it by the size of the sample gives answers that are significantly in error. That becomes apparent once we recognize that a real test is unreliable, having less than perfect sensitivity – it misses some infected people – and less than perfect specificity – it counts some uninfected people as infected. Such a test will give a wrong answer for the fraction infected when the naïve approach is applied to a tested random sample from the target population.
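To make the naïve-versus-corrected distinction concrete, here is a minimal Python sketch of one standard correction of this kind (the Rogan–Gladen estimator) – it may or may not match the formula derived in the technical note, and all the numbers below are hypothetical:

```python
def corrected_fraction(positive_rate, sensitivity, specificity):
    """Adjust the raw (naive) positive rate for an imperfect test
    using the Rogan-Gladen estimator."""
    return (positive_rate + specificity - 1.0) / (sensitivity + specificity - 1.0)

# Hypothetical numbers: 10% truly infected, 90% sensitivity, 95% specificity.
# The expected raw positive rate is then 0.10*0.90 + 0.90*0.05 = 0.135,
# which the naive approach would report as the infected fraction.
naive_estimate = 0.135
print(corrected_fraction(naive_estimate, 0.90, 0.95))  # ~0.10, the true fraction
```

Note that the naïve number (13.5%) overstates the true infected fraction (10%) by a third in this example, because false positives among the large uninfected majority swamp the missed true positives.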
That motivated me to take a closer look, and after pushing some squigglies, the solution for calculating the correct fraction of infected emerged. As expected, the formula involves the sample size and the test’s reliability parameters. All such theoretical developments need confirmation, so I wrote a program to simulate real test results that takes into account errors in all parameter values involved in such testing. The results were heartening: the derived correction formula nails the actual infected fraction, and its confidence bounds are easily computed from the relevant input error distributions.
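The technical note contains the author’s actual simulation; as a rough stand-in for readers who want to try the idea themselves, a few-line Monte Carlo check might look like this (parameter values and the correction formula used here are illustrative assumptions, not taken from the note):

```python
import random

def simulate_positive_rate(n, true_frac, sens, spec, seed=0):
    """Draw n subjects, infect each with probability true_frac, apply
    an imperfect test, and return the fraction testing positive."""
    rng = random.Random(seed)
    positives = 0
    for _ in range(n):
        infected = rng.random() < true_frac
        if infected:
            positives += rng.random() < sens   # true positive with prob. sens
        else:
            positives += rng.random() > spec   # false positive with prob. 1-spec
    return positives / n

naive = simulate_positive_rate(100_000, true_frac=0.10, sens=0.90, spec=0.95)
corrected = (naive + 0.95 - 1.0) / (0.90 + 0.95 - 1.0)
print(naive, corrected)   # naive overshoots; corrected lands near 0.10
```

Running this, the raw positive rate clusters around 0.135 while the corrected estimate clusters around the true 0.10, which is the same confirmation-by-simulation step described above.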
I provide the supporting details for all this in a technical note that you can download here – Download TN2004-2
As a teaser for some more noodling about assessing the state and progress of the C19 pandemic, we read that it’s not only the Chinese who have been lying through their teeth about what happened in Wuhan and what is still going on in the country. Our own mavens of morbidity have not been all that professional and meticulous about the data they publish. Here is a piece in the American Thinker that exposes our CDC which now confesses to having lied about C19 death stats. In ‘The CDC Confesses to Lying About COVID-19 Death Numbers’ we read, “How many people have actually died from COVID-19 is anyone’s guess … but based on how death certificates are being filled out, you can be certain the number is substantially lower than what we are being told. Based on inaccurate, incomplete data people are being terrorized by fearmongers into relinquishing cherished freedoms.” (H/T to reader)
This also has a bearing on testing. When composing a random sample, how do we treat those in the sample who have recently died of C19 (or something else) or have already been diagnosed with C19? Different approaches will inform different policy decisions. Our media are not smart enough to either understand or report on any of these considerations. And does anyone really care about such details, beyond just using their misunderstandings to cast aspersions at each other from their ideological perches?
Now for the downside on the call for all this testing. No one knows how reliable the tests are, especially the new ones. (more here) Somehow all of that has passed by our entire news industry, showing again what kind of pikers they are. Since we know neither the average sensitivity and specificity of the tests, nor their probabilistic dispersions, all of these calls for testing are nothing short of an exercise in politics and emotions. Has anyone even heard Drs Fauci or Birx bringing up test reliability, or showing even a smidgen of concern about that overarching reality in the use of tests and testing? We'll see what kind of interest RR readers show in this issue.
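One way to see why unknown test reliability matters so much: if sensitivity and specificity are themselves known only with uncertainty, that uncertainty propagates directly into the corrected infected fraction. A hypothetical sketch – the beta distributions and every parameter value below are assumptions for illustration, not data from any real C19 test:

```python
import random
import statistics

rng = random.Random(1)
naive = 0.135  # hypothetical observed positive rate from a random sample

estimates = []
for _ in range(10_000):
    # Draw plausible reliability values: roughly 90% sensitivity and 95%
    # specificity, each with some spread reflecting our ignorance.
    sens = rng.betavariate(90, 10)
    spec = rng.betavariate(95, 5)
    estimates.append((naive + spec - 1.0) / (sens + spec - 1.0))

print(statistics.mean(estimates), statistics.stdev(estimates))
# The spread of the corrected fraction shows how fuzzy the headline
# number becomes when test reliability itself is uncertain.
```

Even this toy version shows a corrected estimate whose dispersion is a sizable fraction of the estimate itself – exactly the "probabilistic dispersions" issue raised above.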

