COVID-19 has proven to be a brutal illness for those unlucky enough to get it, and any new disease that has made the jump from animals to sustained human-to-human transmission is inherently dangerous. Nevertheless, the current insanity gripping the world is based upon highly dubious computer models, making those models as dangerous as the virus itself.
On March 9, MSNBC's Chris Hayes attacked Trump for trying to manipulate the experts' COVID-19 death projections downward. One month later, Hayes was back, attacking Trump for intentionally manipulating the numbers upward.
Hayes is an unprincipled moron who will say anything to hurt Trump, but his monomania highlights just how wrong the original high-death predictions have proven to be. The Washington Post's Philip Bump tried to explain away one set of wildly inflated predictions by claiming the experts were accurate and that later behavioral changes simply drove the actual numbers lower.
The mother of all COVID-19 models came from Neil Ferguson of Imperial College London. It was he who first projected that 500,000 people in Britain would die, along with another 2.2 million in America, unless drastic steps were taken. Then, when both countries panicked, he came out with a new, sharply downgraded model (one that still overstates reality).
Ferguson was presented as one of the world's foremost epidemiologists and modelers. Perhaps we should have learned more about him before accepting that claim. With help from Bill Steigerwald, Power Line has an exposé that gives us more information about Ferguson. Full Story with Graphics - Andrea Widburg - American Thinker
Useful vs. Useless COVID-19 Models: A Response to the Armchair Analysts
There's been a lot of discussion by armchair analysts about various models being used to predict outcomes of COVID-19. The armchair analysts I've seen include a philosophy major and a Ph.D. candidate with little experience in statistics, much less in modeling complex systems. In fact, the discussion coming from academia and its sycophants in the media further demonstrates just how deep the "Deep State" runs. For those of us who have built statistical models, all of this discussion brings to mind George Box's dictum: "all models are wrong, but some are useful"...or useless, as the case may be.
The problem with data-driven models, especially when data are lacking, can be explained easily. I'll start with a brief background on statistical analysis, specifically hypothesis testing. First, in terms of helping decision-makers make quality decisions, statistical hypothesis testing and data analysis are just one tool in a large toolbox, and that tool rests on what we often call a reductionist approach. In short, it examines parts of a system and then draws inferences about the whole system. Full Story - John McCloskey - American Thinker
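To make McCloskey's part-to-whole point concrete, here is a minimal sketch in Python. All of the numbers are hypothetical and drawn from none of the models discussed above; the sketch simply estimates a whole-population fatality rate from a sample of observed cases and shows how the confidence interval balloons when data are scarce.

    # A minimal sketch of part-to-whole statistical inference using
    # hypothetical numbers: estimate a population fatality rate from a
    # sample of cases, and watch the uncertainty shrink as data grow.
    import math

    def wilson_interval(deaths, cases, z=1.96):
        """95% Wilson score confidence interval for the rate deaths / cases."""
        p = deaths / cases
        denom = 1 + z**2 / cases
        center = (p + z**2 / (2 * cases)) / denom
        half = (z / denom) * math.sqrt(p * (1 - p) / cases + z**2 / (4 * cases**2))
        return center - half, center + half

    # The same 1% observed rate is nearly useless at 100 cases and far
    # more informative at 10,000 -- the core weakness of any model fit
    # to sparse early data.
    for cases in (100, 10_000):
        deaths = round(cases * 0.01)  # assume a 1% observed fatality rate
        lo, hi = wilson_interval(deaths, cases)
        print(f"{cases:>6} cases: 95% CI = ({lo:.4f}, {hi:.4f})")

The Wilson interval is used here rather than the simpler normal approximation only because it behaves sensibly when events are rare; the point stands either way: with 100 cases the plausible fatality rate spans roughly 0.2% to 5.5%, while with 10,000 cases it narrows to roughly 0.8% to 1.2%.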
Coronavirus Modeling Had Faulty Assumptions; Real Data Gives Us Hope
My dear old statistics teacher used to say that relying on any model founded on past data, however good, was like driving by looking in the rearview mirror: fine as long as the future looked like the past. As governments struggle with their response to the COVID-19 pandemic, the $64 trillion question is "which past does the future look like?"