Science fails to face the shortcomings of statistics
Science News, By Tom Siegfried, March 27th, 2010; Vol.177 #7 (p. 26)
For better or for worse, science has long been married to mathematics. Generally it has been for the better. Especially since the days of Galileo and Newton, math has nurtured science. Rigorous mathematical methods have secured science's fidelity to fact and conferred a timeless reliability to its findings.
During the past century, though, a mutant form of math has deflected science's heart from the modes of calculation that had long served so faithfully. Science was seduced by statistics, the math rooted in the same principles that guarantee profits for Las Vegas casinos. Supposedly, the proper use of statistics makes relying on scientific results a safe bet. But in practice, widespread misuse of statistical methods makes science more like a crapshoot.
It's science's dirtiest secret: The "scientific method" of testing hypotheses by statistical analysis stands on a flimsy foundation. Statistical tests are supposed to guide scientists in judging whether an experimental result reflects some real effect or is merely a random fluke, but the standard methods mix mutually inconsistent philosophies and offer no meaningful basis for making such decisions. Even when performed correctly, statistical tests are widely misunderstood and frequently misinterpreted. As a result, countless conclusions in the scientific literature are erroneous, and tests of medical dangers or treatments are often contradictory and confusing.
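The "random fluke" problem can be made concrete with a minimal simulation (plain Python, illustrative only, not from the article): even when the true effect is exactly zero, roughly 5% of experiments will clear the conventional p < 0.05 bar by chance alone. The sketch below assumes a simple two-sided z-test with known unit variance; the function name `null_experiment` and the sample sizes are invented for illustration.

```python
import math
import random

random.seed(0)

def null_experiment(n=50):
    """Run one experiment in which the true effect is exactly zero.

    Draws n observations from a standard normal distribution and
    returns the two-sided p-value of a z-test against mean = 0
    (variance known to be 1, so z = mean * sqrt(n)).
    """
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean = sum(sample) / n
    z = mean * math.sqrt(n)
    # P(|Z| > |z|) for a standard normal Z, via the complementary error function
    return math.erfc(abs(z) / math.sqrt(2))

# Repeat the null experiment many times and count "significant" results.
trials = 10_000
false_positives = sum(null_experiment() < 0.05 for _ in range(trials))
print(f"{false_positives / trials:.1%} of null experiments came out 'significant'")
```

Running this prints a rate close to 5%: with no real effect anywhere, a literature of such experiments would still report a steady stream of "statistically significant" findings, which is one concrete way the misinterpretations described above produce erroneous conclusions.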
This post has prompted rebuttal and discussion on AnalyticBridge.
I think there are many problems with the use of statistical methods, especially in drug research, where the incentives for finding something, even with flimsy evidence, are huge. However, it is not a problem of statistics, but of incentives and training.
After all, we don't blame car crashes on physics, but on how people drive cars!