
Frequentists Fight Back


Frequentist methods are sometimes described as “classical”, though most have only appeared in recent decades and new ones are under development as you read this. Whatever we call it, this branch of statistics is very much alive.




Frequentist-leaning statisticians have numerous responses to Bayesian criticisms that may not be widely known. Broadly speaking, these rebuttals assert that Bayesian criticisms of Frequentist approaches rely on circular arguments, are self-refuting, rest mostly on semantics, or are mainly of interest to academics and irrelevant in practice. Below, I've briefly summarized the ones I’m aware of from memory and in my own words.

What is "Bayesian"? The meaning of the term is often unclear. Is it objective Bayes, subjective Bayes, approximate Bayes, empirical Bayes, or all of the above? Are Bayesian networks Bayesian? Bayesian methods are complicated and confusing even to academic statisticians!

In the "true" Bayesian approach, priors should be set before looking at data. However, an important commandment of statistics is that we should look very closely at the data before analyzing it in depth in order to familiarize ourselves with it, as well as to clean it and set it up for our final analyses. Exploratory modeling is also common at this stage and sometimes essential. If our exploratory data analysis suggests our earlier choice of priors was misguided, do we simply ignore this?

The criticism that Frequentism is especially hard to explain is puzzling. Few people find anything about statistics intuitive. Moreover, explanations such as "if this research were conducted 100 times in exactly the same way, we would expect a difference this large or larger fewer than 5 times" are just bad explanations. It is not mandatory that Frequentism be explained in an obscure way. To put the shoe on the other foot, is "we generated 20,000 MCMC samples and discarded the first 10,000 as burn-in" clear to most people?
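To make the quoted explanation concrete, here is a minimal simulation - a sketch with invented numbers, in Python - of what that repeated-sampling statement is actually describing: when there is no real effect, roughly 5 of 100 identically run studies will still show a "significant" difference by chance alone.

```python
# Sketch: the repeated-sampling interpretation of a 5% significance level.
# With NO true difference between groups, about 5 of 100 identical studies
# are expected to yield p < 0.05 purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_per_group = 100, 50
false_alarms = 0
for _ in range(n_studies):
    treatment = rng.normal(loc=0.0, scale=1.0, size=n_per_group)  # same distribution
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)    # as treatment
    _, p_value = stats.ttest_ind(treatment, control)
    false_alarms += p_value < 0.05

print(f"Studies flagged 'significant' despite no true effect: {false_alarms} of {n_studies}")
```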

In comparisons purporting to demonstrate the superiority of Bayesian methods, mountains are sometimes made out of rounding error.

Bayesian statistics treats the data as fixed, whereas the data are usually a sample and therefore subject to sampling error. The data used affect the results obtained from Bayesian methods too, though it is sometimes implied that, in contrast to Frequentism, they do not.

It is said that Bayesian statistics allows statisticians to be explicit about their uncertainty. But, typically we’re uncertain about our uncertainty and therefore cannot be explicit.

For many projects, statisticians do not have sufficient information to set priors and must use noninformative priors if they go the Bayesian route. In these cases, Bayesian and Frequentist solutions typically yield comparable results, though Bayesian approaches are normally more time-consuming. So, what’s the point in using Bayes?
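As a rough illustration of this point - a sketch with made-up numbers, not a real analysis - consider estimating a simple proportion: the maximum likelihood estimate and the posterior mean under a flat, noninformative Beta(1, 1) prior are nearly indistinguishable at moderate sample sizes.

```python
# Sketch: Frequentist vs. Bayesian estimates of a proportion when the
# prior is noninformative. Illustrative counts: 80 successes in 200 trials.
from scipy import stats

successes, trials = 80, 200

# Frequentist: maximum likelihood estimate
mle = successes / trials

# Bayesian: flat Beta(1, 1) prior -> Beta(81, 121) posterior
posterior_mean = stats.beta(1 + successes, 1 + trials - successes).mean()

print(f"MLE:            {mle:.4f}")             # 0.4000
print(f"Posterior mean: {posterior_mean:.4f}")  # ~0.4010
```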

In any discipline knowledge accumulates over time and when designing, executing and interpreting new studies, researchers should thoroughly acquaint themselves with previous research pertinent to the questions they are investigating. That said, unless the objective is to reproduce or replicate earlier findings, they should strive for independence and avoid contaminating their findings with findings of other researchers. In short, there is a difference between doing one's homework and potentially cooking the results. That is a line we should not cross.

Related to this last point, if the choice of priors substantially affects the model results, then there may be a problem with the data - either the sample is too small or the data insufficiently informative to draw important conclusions. So, once again, why bother with Bayes?

With maximum likelihood estimation it is often possible to specify starting values for the estimation and to constrain estimates (e.g., so that a price coefficient is always negative). While such features are not truly Bayesian, they suggest the capabilities of Frequentist approaches are not always fully acknowledged.
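Here is a minimal sketch of what that looks like in practice - simulated data and a hypothetical demand model, not anyone's production code - fitting a linear model by maximum likelihood with analyst-chosen starting values and a bound that keeps the price coefficient negative.

```python
# Sketch: constrained maximum likelihood for a toy demand model.
# The bound forces the price coefficient to be <= 0, and the starting
# values are supplied by the analyst.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
price = rng.uniform(1, 10, size=200)
sales = 50 - 3.0 * price + rng.normal(0, 2, size=200)  # simulated data

def neg_log_likelihood(params):
    intercept, price_coef, log_sigma = params
    sigma = np.exp(log_sigma)
    residuals = sales - (intercept + price_coef * price)
    # Gaussian negative log-likelihood (additive constants dropped)
    return 0.5 * np.sum(residuals**2) / sigma**2 + len(sales) * np.log(sigma)

start = np.array([0.0, -1.0, 0.0])                   # analyst-chosen starting values
bounds = [(None, None), (None, 0.0), (None, None)]   # price coefficient constrained <= 0
fit = minimize(neg_log_likelihood, start, method="L-BFGS-B", bounds=bounds)
print(fit.x)  # [intercept, price coefficient (negative), log sigma]
```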

Frequentist Confidence Intervals and Bayesian Credible Intervals are often very similar. If decision makers misconstrue Frequentist results as Bayesian, how seriously would it affect their decisions? Does it really matter outside the Ivory Tower?
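To see how close the two can be, here is a small sketch (simulated data) comparing a classical t-based confidence interval for a mean with the credible interval obtained under a standard noninformative reference prior - a textbook case in which the two intervals coincide.

```python
# Sketch: 95% confidence interval vs. 95% credible interval for a mean.
# Under the reference prior p(mu, sigma^2) proportional to 1/sigma^2,
# the posterior for mu is a Student-t centred at the sample mean, so the
# credible interval matches the classical t interval exactly.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(loc=10.0, scale=3.0, size=40)  # simulated sample
n, xbar, s = len(x), x.mean(), x.std(ddof=1)

# Frequentist: classical t-based confidence interval
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (xbar - t_crit * s / np.sqrt(n), xbar + t_crit * s / np.sqrt(n))

# Bayesian: credible interval from the Student-t posterior for mu
cri = stats.t(df=n - 1, loc=xbar, scale=s / np.sqrt(n)).interval(0.95)

print("95% confidence interval:", ci)
print("95% credible interval:  ", cri)
```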

Bayesian posteriors can be overinterpreted - we generally don't have data or models sufficient to interpret the posteriors as precisely as some Bayesians claim we can (e.g., the probability that a coefficient lies within a very narrow range).

Most decision makers are interested in point estimates, not a range of estimates (e.g., Credible Intervals). In fact, using the word estimate, let alone "uncertainty," can lead some to doubt the value of statistics altogether. Decision makers are generally uncomfortable with uncertainty.

Frequentist statistics is not the same as Null Hypothesis Significance Testing (NHST), and the two terms should not be used interchangeably. Many Frequentists, in fact, have sharply criticized NHST and significance testing in general over the years. R.A. Fisher, for one, was extremely vocal in his disapproval of NHST, describing it as "childish." When reminded of this, Bayesians often respond that he wasn't truly a Frequentist since he laid the groundwork for maximum likelihood estimation (MLE). Many Frequentist methods draw heavily upon MLE, however, so this argument implies that no one is a Frequentist.

Bayesian critics of Frequentism sometimes generalize from misuse of Frequentist methods. This is not only unfair but also raises questions regarding the credibility of those employing this tactic.

There are some highly complex problems that are best handled with Bayesian methods, and others that can only be handled in a Bayesian fashion. There are also problems that have rich prior information that should be formally incorporated into the modeling. But many statisticians will never encounter these sorts of problems and we should not generalize from these exceptions.

The null hypothesis does not necessarily mean a difference between treatment and control that is exactly zero or a regression coefficient that is exactly zero, for instance. These would be examples of what some call the "nil hypothesis," which is nearly always false.

The oft-cited breast cancer screening example purporting to show that Bayesians get the right answer and Frequentists the wrong one is disingenuous. First, Bayes' theorem is not something that has been recently re-discovered - it's been used by statisticians for well over a century. More to the point, fancy calculations are unnecessary - it's schoolboy math.
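For the record, here is that schoolboy math, with purely illustrative numbers - the prevalence, sensitivity and false-positive rate below are assumptions chosen to make the arithmetic concrete, not figures from any actual screening study.

```python
# Sketch: Bayes' theorem applied to a screening test with illustrative inputs.
prevalence = 0.01           # P(cancer)
sensitivity = 0.90          # P(positive test | cancer)
false_positive_rate = 0.09  # P(positive test | no cancer)

p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_cancer_given_positive = sensitivity * prevalence / p_positive

print(f"P(cancer | positive test) = {p_cancer_given_positive:.3f}")  # ~0.092
```

No MCMC and no priors over parameters - just conditional probability that either school of statistics would compute the same way.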

In the interest of disclosure, I use both Frequentist and Bayesian statistics and am not wedded to either, and I have no commercial interest in promoting one or the other. However, some of the criticisms put forth by Bayesian proponents are indeed weak, and occasionally silly, as alleged, and in my opinion this has held back the diffusion of Bayesian methods into the mainstream.

 
Bio: Kevin Gray is president of Cannon Gray, a marketing science and analytics consultancy.

Original. Reposted with permission.
