Update. Right after I published this, I thought to go look at a weekly report from the state. Supposedly the testing and positivity charts there are based on date of test, but in one place the chart is still labeled as being by date of report. I have asked the state for clarification and, if possible, for the table underlying the positivity chart; that would simplify things. Looking at that positivity chart, it has been very stable for the last few weeks. And again, look at what a difference the mask mandate has made. (Mn. Data)
Minnesota’s inconsistency in reporting case and test data is just one area that gives me fits. Cases are reported in a table by date of test result. Tests are only given to us in a table by date reported. So the two don’t match up. I am trying to figure out a method to “normalize,” or adjust, the data to back into some sense of what is actually happening with cases relative to tests. I welcome any suggestions. I started with June 1 because around then we started seeing a big jump in testing. From June 1 through September 16, 1,455,791 tests were done. (Some people were tested more than once.) In the same period, 58,747 cases were reported. (These are positive results, for what that is worth now that we know about the measurement issues, and the count is of unique individuals.) Interestingly, the overall positivity rate in this period is about 4%. There are 107 days in the period, so on average 13,605 tests and 549 cases were reported per day.
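The summary figures above can be reproduced with a few lines of arithmetic. This is just a sketch that recomputes the daily averages and overall positivity from the totals quoted in the text; the input numbers are the post's, not an independent data pull.

```python
# Totals quoted above for June 1 through September 16 (per the post).
total_tests = 1_455_791   # tests reported (some people tested more than once)
total_cases = 58_747      # positive results, unique individuals
days = 107                # number of days in the period, per the post

tests_per_day = total_tests / days      # average tests reported per day
cases_per_day = total_cases / days      # average cases reported per day
positivity = total_cases / total_tests  # overall positivity rate

print(f"tests/day: {tests_per_day:,.0f}")
print(f"cases/day: {cases_per_day:,.0f}")
print(f"overall positivity: {positivity:.1%}")
```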
There are enormous swings in testing volume by day and even by week. Assuming a relatively steady positivity rate, the volume of tests is going to have a dramatic effect on the number of cases. Not being able to precisely match tests and cases by day hinders a true understanding of trends, but you can clearly see the influence of the number of tests on the number of positives. The week ending September 13 isn’t the best week to use, both because cases will still be coming in on future reporting days and because it includes the Labor Day influence, but we can include it just to highlight how cases vary with testing in a relatively steady positivity-rate environment. For the week ending September 13, there were 98,120 tests reported; for the week ending September 6, 121,276 tests; for the week ending August 30, 104,968 tests; and for the week ending August 23, 121,866 tests.
For the same weeks, as of September 16, there were 3,569 cases for days in the week ending September 13 (again, probably with some cases still lagging), 4,438 for days in the week ending September 6, 5,242 for days in the week ending August 30, and 4,750 for days in the week ending August 23. The positivity rates, in the same order, were 3.64%, 3.66%, 4.99%, and 3.9%. The jump in positivity in the week ending August 30 is an outlier and is almost certainly due to a problem we have regularly had with some labs reporting results slowly, especially negative results. You may recall the 30,000-test dump on one day in August, with almost all of the results being negatives. So I suspect there are a bunch of unreported negative results waiting to be dumped on some future day.
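The same-week pairing described above can be sketched as follows. The weekly tests (by date reported) and cases (by date of test result) are the figures quoted in the text; the 2020 year on the week labels is my assumption, and matching the two series by calendar week is the rough pairing the post is cautioning about.

```python
# Same-calendar-week positivity: tests by week of report vs. cases by
# week of test result, as of September 16 (figures from the post).
weeks = {
    # week ending (assumed 2020): (tests reported, cases)
    "2020-09-13": (98_120, 3_569),
    "2020-09-06": (121_276, 4_438),
    "2020-08-30": (104_968, 5_242),
    "2020-08-23": (121_866, 4_750),
}

for week_end, (tests, cases) in weeks.items():
    print(f"week ending {week_end}: {cases / tests:.2%} positivity")
```

The August 30 week stands out at roughly 5%, consistent with the outlier discussed above.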
But another way to think about it is to adjust the tests by date of report for a lag. The Health Department said in a briefing that it takes about a week for a week’s reports to be reasonably complete. I have been tracking this for over a month, and that is about right. So now let’s take each week’s cases against the tests reported a week later. We have to ignore the week ending September 13 now. But for the week ending September 6, we get 4.5% positivity; for the week ending August 30, 4.32%; for the week ending August 23, 4.52%; and for the week ending August 16, which had 4,348 cases, 3.56%. Now, the week ending August 23 is the week that had the big test-reporting dump, so the low positivity rate for the lagged week of August 16 is largely due to that factor. This again shows the importance of being able to match test date and case date. But for the other weeks, you see a very steady positivity rate, which suggests to me that you could normalize cases around it. In other words, pick a date, assume that an average number of tests had been performed, and multiply that by around 4.3% to 4.5%; you will get the number of cases that would have been found if that number of tests had been done.
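The one-week-lag pairing and the normalization idea can be sketched in a few lines. Again, the weekly figures are the post's, the 2020 dates are my assumption, and the averaging choice (mean of the four reporting weeks shown) is just an illustration of the "average number of tests" step, not a method the post specifies.

```python
# Tests by week of report (figures from the post).
tests_reported = {
    "2020-09-13": 98_120,
    "2020-09-06": 121_276,
    "2020-08-30": 104_968,
    "2020-08-23": 121_866,
}
# Cases by week of test result, as of September 16.
cases_by_test_week = {
    "2020-09-06": 4_438,
    "2020-08-30": 5_242,
    "2020-08-23": 4_750,
    "2020-08-16": 4_348,
}

# Lagged positivity: a week's cases over the tests reported one week
# LATER, since a week's reports take about a week to become complete.
lag_pairs = [
    ("2020-09-06", "2020-09-13"),
    ("2020-08-30", "2020-09-06"),
    ("2020-08-23", "2020-08-30"),
    ("2020-08-16", "2020-08-23"),
]
for case_week, report_week in lag_pairs:
    rate = cases_by_test_week[case_week] / tests_reported[report_week]
    print(f"week ending {case_week}: {rate:.2%} lagged positivity")

# Normalization sketch: at an average testing volume, how many cases
# would a steady 4.3%-4.5% positivity rate have found?
avg_tests_per_week = sum(tests_reported.values()) / len(tests_reported)
for p in (0.043, 0.045):
    print(f"at {p:.1%} positivity: ~{avg_tests_per_week * p:,.0f} cases/week")
```

The August 16 week comes out low, as noted above, because its denominator (tests reported in the week ending August 23) was inflated by the big reporting dump.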
In the absence of data from the state allowing us to match test date and positive-result date, this is about as good an adjustment for testing volume as I can figure out. If cases look lower for a week, it is likely because testing was lower, and vice versa. Again, I welcome any thoughts.