# When Good Advice Goes Bad

Consider these four examples of good statistical advice that, when misused, can go bad.

**By Andrew Gelman**.

Here are some examples of good, solid, reasonable statistical advice which can lead people astray.

### Example 1

**Good advice**: Statistical significance is not the same as practical significance.

**How it can mislead**: People get the impression that a statistically significant result is more impressive if it's larger in magnitude.

**Why it's misleading**: See the classic example in which Carl Morris presents three hypothetical results, all statistically significant at the 5% level but with very different estimated effect sizes. In this example, the strongest evidence comes from the smallest estimate, while the result with the largest estimate gives the weakest evidence.
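A small sketch of the idea (with made-up numbers, not Morris's originals, and a skeptical N(0, 1) prior assumed purely for illustration): three results, all more than 1.96 standard errors from zero, where the posterior probability that the effect is positive is highest for the smallest estimate.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Three hypothetical results (estimate, standard error), all "statistically
# significant": each estimate is more than 1.96 standard errors from zero.
results = [(1.1, 0.5), (2.5, 1.2), (10.0, 5.0)]

# Posterior Pr(theta > 0 | y) for each result under a skeptical N(0, 1) prior.
prior_sd = 1.0
probs = []
for est, se in results:
    post_mean = est * prior_sd**2 / (prior_sd**2 + se**2)
    post_sd = math.sqrt(prior_sd**2 * se**2 / (prior_sd**2 + se**2))
    probs.append(phi(post_mean / post_sd))
    print(f"estimate={est:5.1f}  se={se:4.1f}  z={est/se:.2f}  "
          f"Pr(theta>0|y)={probs[-1]:.3f}")
```

All three have z-scores just above 2, yet the posterior probabilities decline as the estimates grow: the noisy 10.0 +/- 5.0 result is the least convincing, because a large estimate with a large standard error is exactly what you'd expect if the prior is doing most of the work.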

### Example 2

**Good advice**: Warnings against p-hacking, cherry-picking, file-drawer effects, etc.

**How it can mislead**: People get the impression that various forms of cheating represent the main threat to the validity of p-values.

**Why it's misleading**: A researcher who doesn't cheat can then think that his or her p-values have no problems. Such researchers don't understand the garden of forking paths: even honest, data-dependent choices of analysis invalidate nominal p-values.
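Here is a toy simulation of the forking-paths problem (my construction, not one from the post): there is no effect anywhere, and the analyst honestly runs only one test per dataset, but *which* test gets run depends on which subgroup looks more interesting. The false-positive rate climbs well above the nominal 5%.

```python
import math
import random

random.seed(1)

def z_stat(xs):
    """z-statistic for testing mean = 0, assuming known sd = 1."""
    return sum(xs) / len(xs) * math.sqrt(len(xs))

# No true effect in either subgroup; the reported analysis is whichever
# subgroup happens to look more "interesting" in this particular dataset.
reps, n = 4000, 50
false_positives = 0
for _ in range(reps):
    group_a = [random.gauss(0, 1) for _ in range(n)]
    group_b = [random.gauss(0, 1) for _ in range(n)]
    za, zb = z_stat(group_a), z_stat(group_b)
    z = za if abs(za) >= abs(zb) else zb   # data-dependent choice of analysis
    if abs(z) > 1.96:
        false_positives += 1

rate = false_positives / reps
print(f"false-positive rate: {rate:.3f} (nominal 0.05)")
```

With two forking analyses the theoretical rate is 1 - 0.95^2, about 0.0975, nearly double the nominal level, and no single analysis involved any cheating.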

### Example 3

**Good advice**: Use Bayesian inference and you'll automatically get probabilistic uncertainty statements.

**How it can mislead**: Sometimes the associated uncertainty statements can be unreasonable.

**Why it's misleading**: Consider my new favorite example: y ~ N(theta, 1), a uniform prior on theta, and you observe y = 1. The point estimate of theta is 1, which is what it is, and the posterior distribution is theta | y ~ N(1, 1), which isn't so unreasonable as a data summary. But you also get probability statements such as Pr(theta > 0 | y) = .84, which seems a bit strong: the idea is that you'd be willing to lay down a 5:1 bet based on data that are consistent with pure noise.

### Example 4

**Good advice**: If an estimate is less than 2 standard errors away from zero, treat it as provisional.

**How it can mislead**: People mistakenly assume the converse: that if an estimate is more than 2 standard errors away from zero, it should essentially be taken as true.

**Why it's misleading**: First, because estimates that are 2 standard errors from zero are easily obtained just by chance, especially in a garden-of-forking-paths setting. Second, because even with no forking paths, publication bias leads to the statistical significance filter: if you only report estimates that are statistically significant, you'll systematically overestimate effect sizes.
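The significance filter is easy to see in simulation. A sketch (my numbers, chosen to make the point vivid): the true effect is 1 and each study's estimate has standard error 1, so power is low. Averaging over all studies recovers the truth, but averaging only the significant ones badly overestimates it.

```python
import random

random.seed(2)

# True effect theta = 1, each study estimates it with standard error 1:
# estimate ~ N(1, 1). Power is low, so the estimate clears the 1.96-se
# significance threshold only occasionally.
theta, reps = 1.0, 20000
estimates = [random.gauss(theta, 1.0) for _ in range(reps)]
significant = [y for y in estimates if abs(y) > 1.96]

avg_all = sum(estimates) / len(estimates)
avg_sig = sum(significant) / len(significant)
print(f"mean of all estimates:         {avg_all:.2f}")  # close to the true 1.0
print(f"mean of significant estimates: {avg_sig:.2f}")  # roughly 2.5x too big
```

Conditioning on significance turns an unbiased estimator into one that exaggerates the effect by a factor of about 2.5 in this setting, with no cheating by anyone.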

### More examples?

Maybe you could supply additional examples of good statistical advice that can get people in trouble? I think this is a big deal.

**Bio: Andrew Gelman** is a professor of statistics and political science and director of the Applied Statistics Center at Columbia University. Andrew has done research on a wide range of topics, including: why it is rational to vote; why campaign polls are so variable when elections are so predictable; why redistricting is good for democracy; and reversals of death sentences.

Original. Reposted with permission.

**Related**:

- Plausibility vs. probability, prior distributions, and the garden of forking paths
- Predictive Power of Terror Alerts and Monkeys
- Amazon Top 20 Books in Statistics