
Be Extremely Cautious About Anything You See Based on a Model

April 2, 2020 | Commentary

There are several methods of coming to conclusions about causative and descriptive relationships. One is to take a set of actual data (you are of course hoping the data is accurate and complete) and run a variety of mathematical tests to see if there is a relationship. For example, you might take everyone who died of a heart attack in a given period of time, gather all kinds of health, socio-economic and demographic data about them, and see if you can detect a relationship. In that example, if you have the patients' weights, you might see if there is some relationship between weight and the rate or likelihood of heart attacks. If you are trying to attribute cause, you have to be careful about confounding, intervening and untested variables. But whatever relationship you come up with might be helpful for guiding future care decisions, and it is probably fairly accurate because you used real data.
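As a rough illustration of what that kind of test looks like, here is a minimal sketch of checking whether weight predicts heart attack risk in a set of observed data. The dataset, effect size and variable names are entirely made up; the point is only the mechanics of looking for a relationship in real data rather than a model.

```python
# Minimal sketch: testing for a relationship between weight and heart attacks
# in observed data. All numbers here are made-up placeholders, not real data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical dataset: weight in kg and whether a heart attack occurred (1/0).
weights = rng.normal(loc=85, scale=15, size=500)
prob = 1 / (1 + np.exp(-(-6 + 0.05 * weights)))   # simulated modest weight effect
heart_attack = rng.binomial(1, prob)

# Fit a simple logistic regression: does weight predict heart attack risk?
# In real work you would also adjust for confounding variables (age, smoking, etc.).
model = LogisticRegression()
model.fit(weights.reshape(-1, 1), heart_attack)

print("Estimated coefficient on weight:", model.coef_[0][0])
print("Predicted risk at 70 kg vs 110 kg:",
      model.predict_proba([[70], [110]])[:, 1])
```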

If you don’t have a lot of actual data, you might make a model instead. In fact, people love to make models, partly because you can have them do or show anything you want, whether it has any relationship to reality or not. You build a model by making certain assumptions, setting certain parameters, and writing mathematical formulas that vary those assumptions and parameters to produce a range of scenarios. We don’t have a lot of actual data about coronavirus disease yet, especially on key matters like infection rates in the population, the rate of serious illness and death rates. So people are writing models, coming up with scenarios and giving them to policy makers and the media, and of course the most extreme scenarios are the most eye-catching. One example is the early Imperial College model, which gave incredibly high death rates and was used to scare policy makers into taking extreme mitigation steps. In my home state, God knows what benighted model the Governor used to come up with 74,000 potential deaths, which, scaled to the national population, would mean roughly 5 million deaths across the US. No one believes that is at all possible.
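To show how much the output of such a model is driven by the assumptions fed into it, here is a minimal sketch that projects deaths from nothing more than an assumed attack rate and an assumed fatality rate. It is not any real epidemiological model, and every number in it is a placeholder; the point is only how wide the resulting range of "scenarios" can be.

```python
# Minimal sketch of how assumption choices drive model output.
# All parameter values are hypothetical placeholders, not estimates.
population = 330_000_000  # rough US population

# A model builder must assume how many people get infected ("attack rate")
# and what fraction of the infected die ("infection fatality rate").
attack_rates = [0.05, 0.20, 0.60]       # assumed fraction of the population infected
fatality_rates = [0.001, 0.005, 0.01]   # assumed fraction of the infected who die

for attack in attack_rates:
    for ifr in fatality_rates:
        deaths = population * attack * ifr
        print(f"attack rate {attack:.0%}, IFR {ifr:.1%} -> "
              f"{deaths:,.0f} projected deaths")

# Depending on the assumptions chosen, the "projection" ranges from tens of
# thousands of deaths to nearly two million -- same model, same arithmetic.
```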

Models are basically worthless if they aren’t based on or tested against actual data, and policy makers should not interpret them as presenting the truth unless they are. Two recent papers explain this far better than I can. The first is written by the well-known statistician Nate Silver, who achieved some prominence for his political projections. (Silver Paper) His paper does an outstanding job of identifying all the variables and assumptions that would have to be specified to build a model for coronavirus disease. He also explains how much real data we are missing to even begin to inform the building of such a model. And he says that because of those difficulties and the missing data, he and his team haven’t even really tried to build one for coronavirus. Please pay particular attention to what he says about death rates and his quotes from a biostatistician about the need for adjustments based on comorbidities.
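One death-rate issue of the kind Silver discusses is that a fatality rate computed from confirmed cases can be badly inflated when most infections are never detected. The sketch below works through that arithmetic with hypothetical numbers of my own choosing, not figures from his paper.

```python
# Sketch of why the measured case fatality rate depends on how many
# infections are actually detected. All figures are hypothetical.
deaths = 1_000
confirmed_cases = 20_000

# Naive case fatality rate: deaths divided by confirmed cases.
cfr = deaths / confirmed_cases
print(f"Case fatality rate from confirmed cases: {cfr:.1%}")   # 5.0%

# If testing only catches a fraction of true infections, the implied
# infection fatality rate is much lower. (This ignores reporting lags
# and the comorbidity adjustments Silver's sources also raise.)
for detection_fraction in (1.0, 0.5, 0.1):
    true_infections = confirmed_cases / detection_fraction
    ifr = deaths / true_infections
    print(f"If {detection_fraction:.0%} of infections are detected, "
          f"implied fatality rate: {ifr:.2%}")
```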

The second paper is written by Pegden and Chikina. (Pegden Paper) These authors make similar points about models in general, but they emphasize a point I have been trying to make, which really is about time. You can’t just pick some arbitrary limited time period, say a month, or three months, or a year, to show the alleged effects of mitigation on infection rates or death rates. If you do, you are missing what happens next. The authors say it better than I can: “Hiding infections in the future is not the same as avoiding them.” Describing realistic models of the effects of mitigation measures on epidemics, they write, “once transmission rates return to normal, the epidemic will proceed largely as it would have without mitigations, unless a significant fraction of the population is immune (either because they have recovered from the infection or because an effective vaccine has been developed), or the infectious agent has been completely eliminated, without risk of reintroduction.” They go on to demonstrate, by extending the time period of some commonly publicized models, that the effect of mitigation is just to delay infections and deaths. They conclude that “The duration of containment efforts does not matter, if transmission rates return to normal when they end,” and that “No model whose purpose is to study the overall benefits of mitigations should end at a time-point before a steady-state is reached.”
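The time-horizon point can be seen in a bare-bones SIR simulation. This is a toy sketch with parameter values I have picked arbitrarily, not the models Pegden and Chikina analyze, but it shows the same behavior: a temporary reduction in transmission mainly shifts infections later unless population immunity is reached before mitigation ends, and cutting the model off early makes mitigation look far more effective than it is.

```python
# Toy SIR model: a temporary mitigation period delays infections rather than
# preventing them. All parameter values are arbitrary illustrative choices.
def simulate(days=720, r0=2.5, infectious_days=7,
             mitigation=(None, None), mitigated_r0=0.8):
    gamma = 1.0 / infectious_days
    s, i, r = 0.999, 0.001, 0.0              # fractions of the population
    start, end = mitigation
    cumulative = []
    for day in range(days):
        current_r0 = mitigated_r0 if start is not None and start <= day < end else r0
        beta = current_r0 * gamma
        new_infections = beta * s * i
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        r += recoveries
        cumulative.append(i + r)                # share ever infected so far
    return cumulative

baseline = simulate()
mitigated = simulate(mitigation=(20, 110))      # 90 days of suppressed transmission

for label, run in (("no mitigation", baseline), ("90-day mitigation", mitigated)):
    print(f"{label}: ever infected by day 120 = {run[119]:.1%}, "
          f"by day 720 = {run[-1]:.1%}")
# Both runs end with most of the population having been infected; the mitigated
# run just gets there later. A model cut off at day 120 would credit mitigation
# with preventing infections it merely postponed.
```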

I just can’t make the point any clearer: all we are doing with these extreme measures is delaying the inevitable. Oh, and by the way, we are also completely trashing our economy, destroying tens of millions of jobs, and creating a plethora of other health and social ills.

2 Comments

  • Skip says:

Your comment at Powerline was insightful. Thanks.

    I spent my entire career in healthcare finance, primarily revenue cycle management. A different set of metrics and calculations but still demanding a focus on facts rather than suppositions.

    I bookmarked this site and plan to return.
