The Outstater

April 28, 2020

“There is, to put it mildly, a huge spread (in the predictions) — the difference between a death toll on par with the number of people who die from injury and violence annually in the U.S. and one that’s closer to the number of people murdered when the Chinese communists moved to suppress counterrevolutionaries between 1950 and 1953.” — “Why Is It So Hard to Make a COVID-19 Model?” FiveThirtyEight, March 31, 2020

THE VIRUS IS TEACHING US that there’s such a thing as bad data. I’m not sure we knew that before. Indeed, for decades any sort of data has been unquestioningly turned into headlines, to say nothing of the junk numbers spewing out of a polling industry that has descended into sophistry.

“Women Found to Be Safer Drivers than Men” is an early example in my files. The Associated Press report, smirkingly cited by the woke as science debunking misogyny, was bunk itself. It ignored that men at the time were driving twice as many miles as women, so the raw accident counts said nothing about which sex was safer per mile driven.

Recently, and in regard to the pandemic, our adjunct Ken Bisson, a physician, warned that the data on COVID-19 testing must be read in careful context before being set in a headline, let alone put into public policy, especially policy seeking to override a free economy.

“The best laboratory test we have for identifying the SARS-CoV-2 virus misses 70 percent of all infected people tested when used for anyone other than the most ill,” he noted by way of example.
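To see what that miss rate does to the numbers, consider a simple illustration. The sketch below assumes a hypothetical cohort of 1,000 infected people; the 30 percent detection rate is the only figure taken from Dr. Bisson’s example.

    # Illustration only: what a test that misses 70 percent of
    # infections does to a "confirmed cases" headline.
    sensitivity = 0.30      # catches 30 of every 100 infected, per the quote above
    true_infections = 1000  # hypothetical cohort, assumed for illustration

    confirmed = int(true_infections * sensitivity)
    missed = true_infections - confirmed

    print(f"Confirmed by testing: {confirmed}")  # 300
    print(f"Missed entirely:      {missed}")     # 700
    # A headline built on "confirmed" counts understates infections
    # by 70 percent before any policy conclusion is drawn.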

And what kind of data do you collect when cash-strapped hospital administrators learn they can charge double by counting “probable” deaths, patients who die with a positive test rather than of the virus itself, the same as those who die solely of it? In short, we have begun to notice that the data varies widely, and without an explanation equal to the seriousness of the situation.

None of this has slowed the headline writers. Our emotions are tossed this way and that each day as modern journalism twists the numbers to fit the narrative of the moment in this “war” against a virus.

War? Bad data certainly can be fatal, if that is what’s meant. Another adjunct of our foundation, the late Norman Van Cott, an economist, addressed this point several years ago, warning that the government’s inability to gather data correctly can be the difference between chasing enemies and being surrounded by them, which is to say, between winning an actual battle and losing one.

So it was for the 7th Cavalry at the Battle of the Little Bighorn in 1876. We now know that Gen. George Armstrong Custer, the historic face of white racist arrogance, may or may not have respected Native American warriors, but he almost certainly was given bad data on them.

Van Cott, writing in the Journal of Economic Education, noted that a primary source of military intelligence for the U.S. Army at the time was the count of Native Americans on reservations. More warriors on the reservations should have meant fewer on the warpath.

“But who counted the Indians?” Van Cott wanted to know, a question repeated so often here that it has become an office trope.

The answer, according to a historian of the battle, Evan Connell, was government experts, agents paid by the number of Native Americans they counted, a perverse incentive that would cost General Custer and his men their scalps:

“Connell reports that reservation agents’ salaries varied directly with reservation populations. This provided an incentive for the agents to overstate the count. In Connell’s words, ‘ . . . an agent foolish enough to report a decrease in population was taking a bite out of his own paycheck.’”

The agents “counted” 37,391 Native Americans on reservations before the battle, but afterward only 11,660 could be found there. Custer thought he was running to ground a relatively small party of warriors when in fact he was facing roughly three times as many.
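The arithmetic behind “three times as many” checks out against Connell’s figures. A minimal sketch, using only the two counts quoted above:

    # Quick check of the overcount, using the figures in the text.
    reported = 37391   # reservation population the agents claimed
    actual = 11660     # population actually found there

    ratio = reported / actual
    print(f"The agents' count was {ratio:.1f} times the real one")  # about 3.2
    # The pay-per-head incentive inflated the count roughly threefold,
    # consistent with Custer facing three times the warriors his
    # intelligence implied.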

In our current battle, this pandemic, will government get the data straightened out in time to organize its forces?

Keep washing your hands. — tcl


