*A review of over 27,000 forecasts showed that the experts were worse than statistical models. In fact they could barely eke out a tie with the proverbial dart-throwing chimps.*

**New York Times, By KATHRYN SCHULZ, March 25, 2011**

What does the future hold? To answer that question, human beings have looked to stars and to dreams; to cards, dice and the Delphic oracle; to animal entrails, Alan Greenspan, mathematical models, the palms of our hands. As the number and variety of these soothsaying techniques suggest, we have a deep, probably intrinsic desire to know the future. Unfortunately for us, the future is deeply, intrinsically unknowable.

This is the problem Dan Gardner tackles in "Future Babble: Why Expert Predictions Are Next to Worthless, and You Can Do Better." Gardner, a Canadian journalist and author of "The Science of Fear," takes as his starting point the work of Philip Tetlock, a professor at the University of Pennsylvania. Beginning in the 1980s, Tetlock examined 27,451 forecasts by 284 academics, pundits and other prognosticators. The study was complex, but the conclusion can be summarized simply: the experts bombed. Not only were they worse than statistical models, they could barely eke out a tie with the proverbial dart-throwing chimps.

*... Isaiah Berlin distinguished between two types of thinkers:*
"The fox knows many things, but the hedgehog knows one big thing." Berlin admired both ways of thinking, but Tetlock borrowed the metaphor to account for why some experts fared better. The least accurate forecasters, he found, were hedgehogs: "thinkers who 'know one big thing,' aggressively extend the explanatory reach of that one big thing into new domains" and "display bristly impatience with those who 'do not get it,' " he wrote. Better experts "look like foxes: thinkers who know many small things," "are skeptical of grand schemes" and are "diffident about their own forecasting prowess."

To his credit, Gardner is a fox. His book, though, is somewhat hedgehoggy. It knows one big thing: that the future cannot be foretold, period, and that those who try to predict it are deluding themselves and the rest of us.

... I want to like this book, because I share Gardner's values and am sympathetic to his project. And clearly, skepticism and intellectual humility need all the champions they can get. But while "Future Babble" pays appropriate homage to the mysteries of the future, it gives short shrift to both the science of the human mind and the richness of the human experience.


Excerpt: 'Future Babble' (Google Books)

But then I realized something. The phrase "better than expected" means that there was a forecast that preceded this one, and by issuing a new and different forecast, the OECD was conceding that the latest information suggested the earlier forecast was wrong. If the first forecast could fail, so could the second, and yet I reacted to the second forecast as if it were a sure thing.

**Comments**

**Gregory Piatetsky **

A good lesson for data miners is to make clear the assumptions on which predictions are based. Predictions drawn from a large number of similar data points have been shown to produce good results as long as the economic environment remains similar: for example, predicting the behavior of millions of customers from the past behavior of millions of similar customers. Macroeconomic forecasts, by contrast, frequently fail because the underlying assumptions are complex and often unknown until they change.
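The "predict from many similar past cases" idea can be sketched as a toy nearest-neighbor vote. Everything here (the single numeric feature, the labels, the function name) is invented for illustration; the point is only that the prediction is borrowed from the most similar historical records, which is exactly why it breaks when the environment those records came from changes.

```python
from collections import Counter

def predict_from_similar(history, new_profile, k=3):
    """Predict a new customer's behavior by majority vote among the k
    most similar past customers. Similarity is just closeness on one
    numeric feature -- a stand-in for a real similarity measure."""
    ranked = sorted(history, key=lambda rec: abs(rec[0] - new_profile))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Past customers: (feature value, observed behavior)
history = [(1.0, "churn"), (1.2, "churn"), (5.0, "stay"),
           (5.5, "stay"), (6.0, "stay")]

print(predict_from_similar(history, 1.1))  # resembles the churners -> "churn"
print(predict_from_similar(history, 5.4))  # resembles the stayers -> "stay"
```

The implicit assumption Piatetsky highlights lives in `history`: the vote is only as good as the claim that past similar customers behaved the way future similar customers will.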

March 27, 2011

**Saed Sayad**

The number of possible outcomes and the complexity of the system that generates those outcomes are two key factors. If there is only one possible outcome, the prediction is trivial regardless of the system's complexity. If the system is random, prediction can do no better than chance regardless of the number of outcomes (provided there are at least two, e.g., a coin flip).
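Both halves of this observation can be checked with a small simulation (the setup is invented for illustration): with a single possible outcome any predictor is perfect, while against a fair coin even the best fixed predictor hovers around chance.

```python
import random

def accuracy(predictions, outcomes):
    """Fraction of outcomes the predictions got right."""
    hits = sum(p == o for p, o in zip(predictions, outcomes))
    return hits / len(outcomes)

random.seed(42)  # reproducible illustration

# Case 1: one possible outcome -- prediction is trivially perfect,
# however complex the generating system may be.
one_outcome = ["up"] * 1000
print(accuracy(["up"] * 1000, one_outcome))  # 1.0

# Case 2: a random generator (a fair coin) -- any predictor,
# however clever, lands near chance (0.5).
coin = [random.choice(["H", "T"]) for _ in range(10_000)]
print(round(accuracy(["H"] * 10_000, coin), 2))  # close to 0.5
```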

March 28, 2011