# Stuff Happens: A Statistical Guide to the “Impossible”

In the summer of 1972, Anthony Hopkins was chosen to play a leading role in a film based on George Feifer's novel *The Girl from Petrovka*. Not owning a copy himself, he went to London to buy one, but none of the main London bookstores had it in stock. On his journey home, however, while waiting for an underground train at Leicester Square station, he noticed a discarded book lying on the seat next to him. It was *The Girl from Petrovka*! The story gets even stranger. Hopkins later had a chance to meet Feifer and told him about finding the book. Feifer mentioned that in November 1971 he had lent a friend a copy of the book, one in which he had made notes pertaining to the publication of an American edition, but the friend had lost it in Bayswater, London. A check of the annotations in the copy Hopkins found showed that it was the very same one Feifer's friend had mislaid!

Why are some people struck by lightning multiple times or, more encouragingly, how could anyone possibly win the lottery more than once? The odds against these sorts of things are enormous. The global financial crisis is still fresh in all our minds... despite reassurances that such an event was virtually impossible, it happened.

Stuff happens.

Humans have historically explained "impossible" occurrences in terms of the supernatural, and still do today. Charlatans in an assortment of disguises, sadly, often capitalize very profitably on chance. Happily though, science offers other ways of looking at these strange happenings and David Hand, an emeritus professor at Imperial College and past president of the Royal Statistical Society, has written an entertaining and readable book on this very subject entitled *The Improbability Principle: Why Coincidences, Miracles, and Rare Events Happen Every Day*. Anthony Hopkins' episode is cited in the opening chapter of the book.

Though this will not come as a surprise to statisticians, extraordinary events are actually commonplace, and Hand quotes Persi Diaconis, a professor of statistics and mathematics at Stanford University and former professional magician: "The really unusual day would be one where nothing unusual happens". (Diaconis is an interesting subject in his own right. See, for example, chronicle.com/article/The-Magical-Mind-of-Persi/129404/). On the evening news, what may at first glance seem to be incontrovertible evidence of an artful conspiracy and cover-up may have a more mundane explanation, such as sheer chance (or incompetence on the part of the authorities). There is also the point of view that our existence itself is a highly improbable event.

The question of why bizarre things happen with regularity has both mathematical and human aspects, and Hand very neatly decomposes the mystery into several laws, which I will summarize briefly here.

**The law of inevitability:** Something *must* happen. If we make a complete list of all possible outcomes then one of them *must* occur, even if each has a tiny probability of occurring. Hand observes, and rightly so, that this law is so obvious we often fail to notice it.

**The law of truly large numbers:** With a sufficiently large number of opportunities, any extreme event is likely to happen. If we toss a coin enough times we will get seemingly impossible streaks of heads or tails, and "hot hands" in sports such as basketball are probably only statistical artifacts. It turns out that the star of the 2010 FIFA World Cup, Paul the Octopus, would have had a less-than-astronomical 1/256 chance (1/2⁸) of predicting all eight of his matches correctly even without psychic powers. Around the globe, many other would-be stars of the animal kingdom fared poorly in their World Cup predictions.
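Both the coin-streak claim and the octopus arithmetic are easy to check. A minimal simulation sketch in Python (the 100,000-flip count and the random seed are my own illustrative choices, not from the book):

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# One particular animal calling all 8 matches by chance:
print(1 / 2**8)  # 0.00390625, i.e. 1 in 256

# With a truly large number of coin flips, "impossible" streaks appear.
flips = [random.random() < 0.5 for _ in range(100_000)]
longest = run = 0
prev = None
for f in flips:
    run = run + 1 if f == prev else 1
    prev = f
    longest = max(longest, run)
print(longest)  # typically 15 or more identical outcomes in a row
```

A run of fifteen heads sounds miraculous in isolation, yet it is close to inevitable once the number of opportunities is large enough.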

Data dredging, unfortunately, is common practice in marketing research and not unheard of in "hard" science. Even when significance testing is used, looking at hundreds or even thousands of patterns is nearly guaranteed to turn up a nugget or two, and those nuggets may turn out to be fool's gold. In the era of Big Data we will need to exercise even more vigilance against being deceived by chance events.
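The danger is easy to demonstrate with pure noise. In this sketch (the 1,000-pattern count, the sample size, and the seed are illustrative assumptions), every "pattern" is random, yet dozens pass a conventional roughly-5% significance test:

```python
import random
import statistics

random.seed(0)

# 1,000 "patterns": each is a pure-noise sample tested against a true mean of 0.
false_positives = 0
for _ in range(1000):
    sample = [random.gauss(0, 1) for _ in range(30)]
    z = statistics.fmean(sample) / (statistics.stdev(sample) / 30 ** 0.5)
    if abs(z) > 1.96:  # conventional two-sided 5% threshold
        false_positives += 1
print(false_positives)  # dozens of "significant" nuggets from pure noise
```

Sift through enough patterns and significance testing alone will not protect you; corrections for multiple comparisons, or validation on fresh data, are needed.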

**The law of very small numbers:** Researchers in many fields fall prey to this law by drawing sweeping and dramatic conclusions from small samples, conclusions that cannot be replicated. This is another way of capitalizing on chance, unwittingly or not.
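A quick simulation makes the point: the same fair coin-flip process routinely produces "dramatic" results in small samples that all but vanish in larger ones (the 70% cutoff and the sample sizes are my own illustrative choices):

```python
import random

random.seed(1)

def extreme_rate(n, trials=10_000):
    """Share of fair coin-flip samples of size n showing 70%+ heads."""
    hits = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(n))
        if heads / n >= 0.7:
            hits += 1
    return hits / trials

print(extreme_rate(10))   # small samples: "dramatic" results are common
print(extreme_rate(100))  # large samples: the same result is vanishingly rare
```

With only ten observations, a 70% heads rate turns up in a substantial fraction of samples by chance alone; with a hundred observations it essentially never does.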

**The law of selection:** We can make probabilities as high as we like if we choose *after* the event. This is where the human side of the riddle begins to emerge. Put another way, we can pick winners after they've won, often unknowingly. In Hand's words when discussing hindsight bias: "...*once the future has become the past*, then it's easy to look back and see the paths which led to it." Some people, predictably, are devious and in effect paint the target after they've shot the arrows. Policymakers and assorted experts can be very skillful at using the law of selection. Related to this is HARKing, Hypothesizing After the Results are Known, which can surface in any discipline. To quote Hand once again, "...researchers might sift through the data, observe a hint of a trend in a particular direction, and then carry out a more elaborate statistical analysis and test *of the same data* to see if the trend is significant. But any conclusion will be distorted by the initial observation of a trend."

There is also something known as overfitting that all statisticians must watch out for, and a nice definition of it is given by Wikipedia: "In statistics and machine learning, overfitting occurs when a statistical model describes random error or noise instead of the underlying relationship. Overfitting generally occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model which has been overfit will generally have poor predictive performance, as it can exaggerate minor fluctuations in the data." If we don't take this phenomenon into account when building a model, we may find ourselves impressing our client with a model that fits historical data wonderfully but subsequently performs sub-par when applied to new data.
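Overfitting is easy to reproduce. In the sketch below (the data-generating setup is an illustrative assumption, not from the book), a degree-7 polynomial that passes through every training point exactly has zero training error, yet on fresh data it is typically beaten by a plain least-squares line:

```python
import random

random.seed(7)

# True relationship: y = x, plus noise. Train on 8 points; hold out 9 fresh
# ones, including one just beyond the training range, where overfit models
# tend to degrade fastest.
def noisy(x):
    return x + random.gauss(0, 0.5)

train = [(float(x), noisy(x)) for x in range(8)]
test = [(x + 0.5, noisy(x + 0.5)) for x in range(9)]

def interpolate(pts, x):
    """Lagrange polynomial through every point in pts, evaluated at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(pts):
        term = yi
        for j, (xj, _) in enumerate(pts):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

def fit_line(pts):
    """Ordinary least-squares straight line through pts."""
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    return lambda x: my + slope * (x - mx)

poly = lambda x: interpolate(train, x)  # 8 parameters: fits the noise exactly
line = fit_line(train)                  # 2 parameters: some training error

def mse(model, pts):
    return sum((model(x) - y) ** 2 for x, y in pts) / len(pts)

print(mse(poly, train), mse(line, train))  # the polynomial "wins" on seen data
print(mse(poly, test), mse(line, test))    # and typically loses badly on new data
```

The polynomial's perfect in-sample fit is exactly the "exaggerating minor fluctuations" the Wikipedia definition warns about.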

**The law of the probability lever:** A slight change in circumstances can have a huge impact on probabilities. Think of yourself driving a car and how little decisions you make along the way can have a big impact on how long it takes to get to your destination. Statisticians will spot the connection to multivariate modeling, in which minute changes in coefficient estimates can be very consequential. Returning to financial crises, the normal distribution has arguably been overused to assess risk and Hand cites the example of a 5-sigma event, which has a theoretical 1 in 3.5 million probability of occurring. This shrinks to a less spectacular 1 in 16 chance if a fatter-tailed Cauchy distribution is used in place of the normal.
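Hand's 5-sigma comparison can be reproduced with the Python standard library alone, using the one-sided upper tail of a standard normal versus a standard Cauchy distribution:

```python
import math

# One-sided probability of exceeding 5 under a standard normal distribution
p_normal = 0.5 * math.erfc(5 / math.sqrt(2))
print(round(1 / p_normal))  # about 3.5 million to one

# Same threshold under a standard Cauchy distribution (much fatter tails)
p_cauchy = 0.5 - math.atan(5) / math.pi
print(round(1 / p_cauchy))  # about 16 to one
```

Nothing about the event changed; only the assumed distribution did, and the "impossible" became merely unlikely.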

**The law of near enough:** We often regard events that are similar as *identical* and sometimes, consciously or not, adjust our "predictions" after the fact. Once again, policymakers and assorted experts, including marketing researchers, can use this law to their advantage, intentionally or otherwise.

*The Improbability Principle* and statistical science offer a myriad of handy tips - far too many for this short article - and I'll note just a few more. We should recall that random does not mean evenly spread; in fact, a very regular pattern suggests a *lack* of randomness. Regression to the mean is another phenomenon that seems to show up nearly everywhere. An example pertinent to marketing research is when we test a large number of product concepts: by chance alone, some of the ones scoring exceptionally well would do less well if tested again, and some of the ones that performed very poorly would do better the next time around.
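Regression to the mean in concept testing is easy to simulate. In this sketch (the score scale, noise levels, and counts are illustrative assumptions, not from the book), the top-scoring concepts from a first test score noticeably lower on an identical retest:

```python
import random
import statistics

random.seed(3)

# 200 product concepts: each has a stable "true" appeal plus test-day noise.
true_appeal = [random.gauss(50, 5) for _ in range(200)]
test1 = [t + random.gauss(0, 5) for t in true_appeal]
test2 = [t + random.gauss(0, 5) for t in true_appeal]

# Select the 20 concepts that scored best in the first test...
top = sorted(range(200), key=lambda i: test1[i], reverse=True)[:20]

# ...and compare their average scores across the two tests.
print(statistics.fmean(test1[i] for i in top))  # inflated by lucky noise
print(statistics.fmean(test2[i] for i in top))  # regresses toward the mean of 50
```

The concepts themselves did not change between tests; the winners were partly selected *for* their lucky noise, which does not repeat.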

With tongue in cheek, Hand suggests that a soothsayer's manual would include three fundamental principles:

- Use signs no one else can understand;
- Make all your predictions ambiguous; and
- Make as many different predictions as you possibly can.

I note that the foregoing would also be sound advice for aspiring business pundits. Hand contrasts this with the scientific method, in which we are required to:

- Describe our measurement process clearly so that others know exactly what we have done; and
- Give clear descriptions of what our scientific hypothesis implies, so we can see when it's giving incorrect predictions.

One evening, as I was finishing the final chapter of *The Improbability Principle*, I suddenly thought of a cousin of mine who I hadn't heard from for a couple of years. There was nothing special about the date and his work has no direct connection with statistics or with marketing research. Out of the blue, I just found myself wondering how he was doing. The following morning there was an email from him waiting in my inbox, wondering how *I* was doing.

Stuff happens.

**Bio: Kevin Gray** is president of Cannon Gray, a marketing science and analytics consultancy.

Original. Reposted with permission.
