# Distribution hypothesis testing – what is the point of doing it if you can't "accept" your null hypothesis?

Various hypothesis tests, such as the $\chi^2$ goodness-of-fit test, Kolmogorov-Smirnov, Anderson-Darling, etc., follow this basic format:

\$H_0\$: The data follow the given distribution.

\$H_1\$: The data do not follow the given distribution.

Typically, one assesses the claim that some given data follow some given distribution, and if one rejects $H_0$, the data are not a good fit for the given distribution at some $\alpha$ level.

But what if we don't reject $H_0$? I've always been taught that one cannot "accept" $H_0$; rather, we fail to find evidence against it. That is, we merely lack evidence to reject the claim that the data follow the given distribution.

Thus, my question is, what is the point of performing such testing if we can't conclude whether or not the data follow a given distribution?


Broadly speaking (not just in goodness of fit testing, but in many other situations), you simply can't conclude that the null is true, because there are alternatives that are effectively indistinguishable from the null at any given sample size.

Here are two distributions: a standard normal (green solid line), and a similar-looking one (90% standard normal and 10% standardized beta(2,2), marked with a red dashed line):

The red one is not normal. At, say, $n=100$, we have little chance of spotting the difference, so we can't assert that the data are drawn from a normal distribution: they might equally have come from a non-normal distribution like the red one instead.

Smaller fractions of standardized betas with equal but larger parameters would be much harder to see as different from a normal.
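A quick simulation makes the point concrete. The sketch below (Python with NumPy/SciPy) reconstructs the 90/10 mixture from the figure's description — the `mixture_sample` helper is just an illustrative name — and estimates how often a Kolmogorov-Smirnov test against N(0,1) rejects at $n=100$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def mixture_sample(n, frac=0.10):
    """90% standard normal, 10% standardized beta(2,2) (mean 0, sd 1)."""
    k = rng.binomial(n, frac)
    # beta(2,2) has mean 1/2 and variance 1/20; standardize it
    beta = (rng.beta(2, 2, size=k) - 0.5) / np.sqrt(1 / 20)
    return np.concatenate([rng.standard_normal(n - k), beta])

# How often does a KS test for N(0,1) reject at alpha = 0.05 when n = 100?
n_sims, rejections = 1000, 0
for _ in range(n_sims):
    x = mixture_sample(100)
    if stats.kstest(x, "norm").pvalue < 0.05:
        rejections += 1

print(f"rejection rate at n = 100: {rejections / n_sims:.3f}")
```

The rejection rate comes out barely above the nominal 5%: at this sample size the test has essentially no power to distinguish the mixture from a true normal.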

But given that real data are almost never from some simple distribution, if we had a perfect oracle (or effectively infinite sample sizes), we would essentially always reject the hypothesis that the data were from some simple distributional form.
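The flip side is that with a large enough sample, even a tiny deviation like this one gets detected. A sketch under the same mixture assumption (two million observations is enough here, since the true KS distance from N(0,1) is only about 0.003):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# One very large sample from the 90/10 normal/standardized-beta(2,2) mixture
n, frac = 2_000_000, 0.10
k = rng.binomial(n, frac)
beta = (rng.beta(2, 2, size=k) - 0.5) / np.sqrt(1 / 20)  # standardized beta(2,2)
x = np.concatenate([rng.standard_normal(n - k), beta])

res = stats.kstest(x, "norm")
print(f"KS statistic = {res.statistic:.5f}, p-value = {res.pvalue:.2e}")
```

The KS statistic is minuscule in absolute terms, yet the p-value is essentially zero: the test confidently flags a deviation that is, for most practical purposes, irrelevant.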

As George Box famously put it, "All models are wrong, but some are useful."

Consider, for example, testing normality. It may be that the data actually come from something close to a normal, but will they ever be exactly normal? They probably never are.

Instead, the best you can hope for with that form of testing is the situation you describe. (See, for example, the post Is normality testing essentially useless?; a number of other posts here make related points.)

This is part of the reason I often suggest to people that the question they're actually interested in (which is often something nearer to 'are my data close enough to distribution $F$ that I can make suitable inferences on that basis?') is usually not well answered by goodness-of-fit testing. In the case of normality, the inferential procedures people wish to apply (t-tests, regression, etc.) tend to work quite well in large samples, often even when the original distribution is fairly clearly non-normal, and that is precisely when a goodness-of-fit test will be very likely to reject normality. It's little use having a procedure that is most likely to tell you that your data are non-normal just when the question doesn't matter.

Consider the image above again. The red distribution is non-normal, and with a really large sample a test of normality applied to a sample from it would be very likely to reject. Yet at much smaller sample sizes, regressions and two-sample t-tests (and many other procedures besides) will behave so nicely as to make it pointless to worry about that non-normality even a little.
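This can be checked directly. The sketch below (again assuming the 90/10 mixture reconstructed from the figure's description) runs a two-sample t-test on contaminated data under a true null and estimates the empirical type I error rate, which should sit near the nominal 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def mixture_sample(n, frac=0.10):
    """90% N(0,1) + 10% standardized beta(2,2), as in the figure."""
    k = rng.binomial(n, frac)
    beta = (rng.beta(2, 2, size=k) - 0.5) / np.sqrt(1 / 20)
    return np.concatenate([rng.standard_normal(n - k), beta])

# Two-sample t-test with H0 true: does the non-normality distort the size?
n_sims, rejections = 2000, 0
for _ in range(n_sims):
    a, b = mixture_sample(30), mixture_sample(30)
    if stats.ttest_ind(a, b).pvalue < 0.05:
        rejections += 1

rate = rejections / n_sims
print(f"empirical type I error rate: {rate:.3f}")
```

Despite the contamination, the rejection rate lands close to 0.05 even at $n=30$ per group, so the t-test behaves essentially as advertised.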

Similar considerations extend not only to other distributions but, more broadly, to much of hypothesis testing in general (even a two-tailed test of $\mu = \mu_0$, for example). One might as well ask the same kind of question: what is the point of performing such testing if we can't conclude whether or not the mean takes a particular value?

You might be able to specify some particular forms of deviation and look at something like equivalence testing, but it's kind of tricky with goodness of fit because there are so many ways for a distribution to be close to but different from a hypothesized one, and different forms of difference can have different impacts on the analysis. If the alternative is a broader family that includes the null as a special case, equivalence testing makes more sense (testing exponential against gamma, for example) — and indeed, the "two one-sided test" approach carries through, and that might be a way to formalize "close enough" (or it would be if the gamma model were true, but in fact would itself be virtually certain to be rejected by an ordinary goodness of fit test, if only the sample size were sufficiently large).
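To make the exponential-vs-gamma example a bit more concrete, here is one possible sketch of a "two one-sided tests" (TOST) style check. Everything here is illustrative: the equivalence margin $(0.8, 1.25)$ on the gamma shape parameter is an arbitrary choice, and the standard error uses a Wald approximation based on the profile Fisher information for the shape:

```python
import numpy as np
from scipy import stats, special

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=500)  # truth: exponential (gamma shape = 1)

# Fit a gamma (shape k, scale theta); the exponential is the case k = 1.
k_hat, _, theta_hat = stats.gamma.fit(x, floc=0)

# Asymptotic SE of the shape MLE via the profile Fisher information:
# I(k) = psi'(k) - 1/k  (trigamma minus 1/k), with scale profiled out.
info = special.polygamma(1, k_hat) - 1.0 / k_hat
se = 1.0 / np.sqrt(len(x) * info)

# TOST: declare "equivalent to exponential" if k is inside (0.8, 1.25).
lo, hi = 0.8, 1.25
p_lower = stats.norm.sf((k_hat - lo) / se)   # H0: k <= 0.8
p_upper = stats.norm.cdf((k_hat - hi) / se)  # H0: k >= 1.25
p_tost = max(p_lower, p_upper)

print(f"shape MLE = {k_hat:.3f}, TOST p-value = {p_tost:.4f}")
```

A small `p_tost` supports the conclusion that the shape is within the chosen margin of 1, i.e. "close enough" to exponential in this one particular sense. Note how this only formalizes closeness along a single parametric direction, which is exactly the limitation described above.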

Goodness of fit testing (and often more broadly, hypothesis testing) is really only suitable for a fairly limited range of situations. The question people usually want to answer is not so precise, but somewhat more vague and harder to answer — but as John Tukey said, "Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise."

Reasonable approaches to answering the more vague question may include simulation and resampling investigations to assess the sensitivity of the desired analysis to the assumption you are considering, compared to other situations that are also reasonably consistent with the available data.
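As one small example of such a sensitivity check, the sketch below (again using the 90/10 mixture as a plausible "near-normal truth" reconstructed from the figure) asks whether the ordinary 95% t-interval for a mean still achieves roughly its nominal coverage when the normality assumption is mildly violated:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def mixture_sample(n, frac=0.10):
    """A plausible 'near-normal' truth: 90% N(0,1) + 10% standardized beta(2,2)."""
    k = rng.binomial(n, frac)
    beta = (rng.beta(2, 2, size=k) - 0.5) / np.sqrt(1 / 20)
    return np.concatenate([rng.standard_normal(n - k), beta])

# Does the usual 95% t-interval for the mean still cover ~95% of the time
# when the data actually come from the contaminated distribution?
n, n_sims, covered = 30, 2000, 0
tcrit = stats.t.ppf(0.975, df=n - 1)
for _ in range(n_sims):
    x = mixture_sample(n)
    half = tcrit * x.std(ddof=1) / np.sqrt(n)
    if -half <= x.mean() <= half:   # true mean is 0
        covered += 1

print(f"empirical coverage: {covered / n_sims:.3f}")
```

The empirical coverage comes out very close to 95%, which answers the question the analyst actually cares about far more directly than a goodness-of-fit p-value would.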

(It's also part of the basis for the approach to robustness via $\varepsilon$-contamination: essentially, looking at the impact of being within a certain distance of the assumed distribution in the Kolmogorov-Smirnov sense.)
