Minnesota’s Modeling Nightmare

October 3, 2023 | Commentary

Like a lot of places, Minnesota relied on bad modeling in part as justification for stupid, futile, extremely damaging virus suppression measures. Unlike a lot of other places, Minnesota wasted money on its very own bad model, awful in fact. Regular readers may recall the series of posts I wrote critiquing the model. I offered multiple times to provide feedback and aid to an obviously inept modeling group, the Z team of modeling if you will, but they didn’t need any help. Governor Walz, the Incompetent Blowhard, first pimped the model and bragged about it as though he wrote it himself, and maybe he did, it is that bad. He deleted, or attempted to delete, his early press conferences in which he attributed his decisions to close schools and businesses and lock people indoors to the model.

Now we learn the state will spend $17 million determining what went wrong with the model. I would be happy to share my insights and ideas for a better model for free. So I wrote an op-ed for the Star Tribune on the topic. The Strib published several of my epidemic op-eds before it became an overt arm of the Democrat Party. They said they would pass on this one. Criticism of Walz is verboten under Fuhrer Grove’s reign. But Powerline very graciously published it, and you can read it below as well. I stand by every word.


We learned this week that the State of Minnesota intends to use a $17 million grant to study why the model developed by the state for use during the CV-19 epidemic was a mere million miles from reality. You will recall that under the model as many as 50,000 Minnesotans would die and many more would be hospitalized, and those predictions were used as the basis for closing schools and businesses, forcing people to stay indoors, and visiting numerous other futile and damaging viral suppression measures upon the citizenry. Those suppression measures have done immense damage to Minnesotans, far more than the epidemic itself, particularly to children.

Governor Walz repeatedly bragged about the model and how unique we were in developing it to guide policy. He has scrubbed his press conferences from YouTube and other sources so they couldn’t be used against him, but once something is on the internet, it is always there. And he clearly used the model to justify the shutdowns and his accompanying terror campaign of unrelenting fearmongering, which led to anxiety, depression, greater drug and alcohol use and most importantly, missed health care that will cause deaths and more serious disease for years.

The model was built by a particularly inept team, which seemed not to have a grasp of basic modeling or statistical principles. At one point this paper ran a story about how wonderful it was that graduate students at the University of Minnesota were participating in the project. That should have been a clear warning sign of impending disaster. In fairness, many other models were equally poorly constructed, but there was a small group of independent statisticians and epidemiologists who early on created models which proved to be far more accurate.

I and others repeatedly attempted to point out to the modeling team the major flaws in its model, but we were either ignored or told that the team knew what it was doing. I ran a whole series of posts on the flaws and how the model could be improved. Good models in health care are all built on large databases of past information. A common example is identifying the best treatment patterns for a disease. You can take large claims databases and electronic medical records covering the care delivered by many physicians, in different ways, to a huge number of patients, examine outcomes, and extract a model that says the best method for treating this disease in this type of patient is X.

An early epidemic model by definition does not have a large database of past events. A common truism is that a model only tells the modeler what the modeler has told it to say. Lacking sufficient data, the Minnesota team essentially made up parameters and inputs. And they made incredibly basic errors. It is common knowledge that in an infectious disease epidemic the people most likely to become infected and to experience serious disease are those in poor health, whether due to age or pre-existing disease. So the early part of an epidemic will see disproportionately high rates of hospitalization and death. Not accounting for this is a failure to perform the fundamental check that the sample generating your data is representative of the population as a whole. The virus didn’t “sample” or infect the population randomly; it infected certain parts of the population in a highly preferential manner.
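The arithmetic behind this sampling error is easy to illustrate. Here is a minimal sketch using entirely hypothetical numbers (the group sizes, fatality rates, and early case mix below are assumptions for demonstration, not Minnesota data): when early infections skew toward a high-risk minority, extrapolating the early fatality rate to the whole population inflates the projected death toll several-fold.

```python
# Hypothetical two-group population; all figures are illustrative assumptions.
N = 1_000_000
HIGH_RISK_FRAC = 0.15          # share in poor health (age, pre-existing disease)
IFR_HIGH, IFR_LOW = 0.05, 0.001  # assumed infection fatality rates per group

# Early in the epidemic, infections skew heavily toward the high-risk group.
early_infections = 10_000
early_high = int(early_infections * 0.60)  # assume 60% of early cases are high-risk
early_low = early_infections - early_high
early_deaths = early_high * IFR_HIGH + early_low * IFR_LOW
naive_ifr = early_deaths / early_infections  # treats early cases as representative

# A modeler who ignores the biased sample extrapolates the early rate to everyone:
naive_projected_deaths = naive_ifr * N

# The true population-wide rate weights each group by its actual share:
true_ifr = HIGH_RISK_FRAC * IFR_HIGH + (1 - HIGH_RISK_FRAC) * IFR_LOW
true_projected_deaths = true_ifr * N

print(f"naive IFR from early cases: {naive_ifr:.4f}")
print(f"true population-wide IFR:   {true_ifr:.4f}")
print(f"naive projection: {naive_projected_deaths:,.0f} deaths")
print(f"true projection:  {true_projected_deaths:,.0f} deaths")
```

With these made-up inputs the naive extrapolation overstates deaths by more than a factor of three; the exaggeration grows with the gap between the groups and the degree of early skew.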

The failure to account for extensive variation in the susceptibility of the population to infection, in viral load, in transmission circumstances, and other crucial parameters led to a model that was far too simplistic and not concordant with the reality of the epidemic as it unfolded. And again, in fairness to the team, much of the data critical to building a good model simply wasn’t well understood, including basic facts about the infectivity and survivability of the virus itself, and viral transmission dynamics, something we still don’t have a clear picture of.
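The effect of ignoring variation in susceptibility can be shown with a toy comparison, not the state's model: a standard homogeneous SIR run against a version with two susceptibility classes that share the same average susceptibility. Every parameter here (transmission and recovery rates, the 1.8/0.2 susceptibility split) is an assumption chosen for illustration; the point is only that the homogeneous version predicts a substantially larger epidemic, because it lets the virus keep finding equally susceptible people long after the truly susceptible have been depleted.

```python
def sir_homogeneous(beta, gamma, days, dt=0.1):
    """Basic SIR via Euler steps; returns the final attack rate."""
    s, i, r = 0.999, 0.001, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * s * i * dt
        rec = gamma * i * dt
        s -= new_inf
        i += new_inf - rec
        r += rec
    return r

def sir_two_group(beta, gamma, days, dt=0.1, rel_susc=(1.8, 0.2), split=0.5):
    """Same model, but susceptibles sit in two classes with the same
    mean susceptibility (0.5*1.8 + 0.5*0.2 = 1.0) and high variance."""
    s = [split * 0.999, (1 - split) * 0.999]
    i, r = 0.001, 0.0
    for _ in range(int(days / dt)):
        force = beta * i * dt                 # force of infection this step
        new_inf = [rel_susc[k] * force * s[k] for k in range(2)]
        rec = gamma * i * dt
        for k in range(2):
            s[k] -= new_inf[k]
        i += sum(new_inf) - rec
        r += rec
    return r

beta, gamma = 0.3, 0.1   # implies R0 = 3; assumed purely for illustration
homo = sir_homogeneous(beta, gamma, days=300)
hetero = sir_two_group(beta, gamma, days=300)
print(f"homogeneous final attack rate:   {homo:.2f}")
print(f"heterogeneous final attack rate: {hetero:.2f}")
```

The high-susceptibility class is infected and removed early, which slows transmission for everyone else, so the heterogeneous run ends with a markedly smaller share of the population ever infected than the homogeneous run predicts.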

I would not have high confidence in getting a better model from essentially the same group and approach that produced the first disaster. The state would spend its money far better by engaging the groups that did produce better models and by involving people who demonstrated innovative approaches. But we should also recognize how foolish it is to set public policy based on models, particularly in the early phase of an epidemic. I would hope that would be a key lesson from the now widely acknowledged failure of shutdowns, school closures, social distancing, plastic barriers, masking and other measures.
