# Solved – Using p-value to compute the probability of hypothesis being true; what else is needed

Question:

One common misunderstanding of p-values is that they represent the probability of the null hypothesis being true. I know that's not correct and I know that p-values only represent the probability of finding a sample as extreme as this, given that the null hypothesis is true. However, intuitively, one should be able to derive the former from the latter. There must be a reason why no one is doing this. What information are we missing that prevents us from deriving the probability of a hypothesis being true from the p-value and related data?

Example:

Our hypothesis is "Vitamin D affects mood" (null hypothesis being "no effect"). Let's say that we perform an appropriate statistical study with 1000 people and find a correlation between mood and vitamin levels. All other things being equal, a p-value of 0.01 indicates higher likelihood of true hypothesis than a p-value of 0.05. Let's say we get a p-value of 0.05. Why can't we calculate the actual probability that our hypothesis is true? What information are we missing?

Alternate terminology for frequentist statisticians:

If you accept the premise of my question, you can stop reading here. The following is for people who refuse to accept that a hypothesis can have a probability interpretation. Let's forget the terminology for a moment. Instead…

Let's say you are betting with your friend. Your friend shows you a thousand statistical studies about unrelated subjects. For each study you are only allowed to look at the p-value, sample size, and standard deviation of the sample. For each study, your friend offers you some odds to bet that the hypothesis presented in the study is true. You can choose to either take the bet or not take it. After you have made bets for all 1000 studies, an oracle ascends upon you and tells you which hypotheses are correct. This information allows you to settle the bets. My claim is that there exists an optimal strategy for this game. In my worldview that's equivalent to knowing the probabilities of the hypotheses being true, but if we disagree on that, it's fine. In that case we can simply talk about ways to employ p-values to maximize the expectation for the bets.

Answer:

Other answers get all philosophical, but I don't see why it is needed here. Let's consider your example:

> Our hypothesis is "Vitamin D affects mood" (null hypothesis being "no effect"). Let's say that we perform an appropriate statistical study with 1000 people and find a correlation between mood and vitamin levels. All other things being equal, a p-value of 0.01 indicates higher likelihood of true hypothesis than a p-value of 0.05. Let's say we get a p-value of 0.05. Why can't we calculate the actual probability that our hypothesis is true? What information are we missing?

For \$n=1000\$, getting \$p=0.05\$ corresponds to a sample correlation coefficient of \$\hat\rho=0.062\$. The null hypothesis is \$H_0: \rho=0\$. The alternative hypothesis is \$H_1: \rho\ne 0\$.

The p-value is \$\$p\text{-value} = P\big(|\hat\rho|\ge 0.062 \;\big|\; \rho=0\big),\$\$ and we can compute it based on the sampling distribution of \$\hat\rho\$ under the null; nothing else is needed.
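To make this concrete, here is a minimal sketch of that computation in plain Python. It relies on the standard normal approximation to the null sampling distribution of the sample correlation, \$\hat\rho \sim N(0, 1/\sqrt{n-1})\$ under \$\rho=0\$, which is an assumption on my part but is quite accurate at this sample size:

```python
from math import erfc, sqrt

n, rho_hat = 1000, 0.062

# Under H0 (rho = 0), hat(rho) is approximately Normal(0, 1/sqrt(n-1))
sd0 = 1 / sqrt(n - 1)
z = rho_hat / sd0

# Two-sided p-value: P(|hat(rho)| >= 0.062 | rho = 0)
p_value = erfc(z / sqrt(2))
print(round(p_value, 3))  # ≈ 0.05
```

Note that nothing here required a prior: the whole calculation conditions on \$\rho=0\$.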

You want to compute \$\$P(H_0\;|\;\text{data})=P\big(\rho=0\;\big|\; \hat\rho= 0.062\big),\$\$

and for this you need a whole set of additional ingredients. Indeed, by applying Bayes' theorem we can rewrite it as follows:

\$\$\frac{P\big( \hat\rho= 0.062 \;\big|\;\rho=0\big) \cdot P(\rho=0)}{P\big( \hat\rho= 0.062 \;\big|\;\rho=0\big) \cdot P(\rho=0)+P\big( \hat\rho= 0.062 \;\big|\;\rho\ne0\big) \cdot \big(1-P(\rho=0)\big)}.\$\$

So to compute the posterior probability of the null you need to have two additional things:

1. The prior probability that the null hypothesis is true: \$P(\rho=0)\$.
2. An assumption about how \$\rho\$ is distributed if the alternative hypothesis is true. This is needed to compute the \$P\big( \hat\rho= 0.062 \;\big|\;\rho\ne0\big)\$ term.
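As a sketch of how those two ingredients would be used, the snippet below plugs in illustrative assumptions that are not given by the problem: \$P(\rho=0)=0.5\$, \$\rho \sim N(0, \tau)\$ with \$\tau=0.1\$ under the alternative, and the normal approximation \$\hat\rho\,|\,\rho \sim N(\rho, 1/\sqrt{n-1})\$ for the sampling distribution:

```python
from math import exp, pi, sqrt

n, rho_hat = 1000, 0.062
sd0 = 1 / sqrt(n - 1)  # sampling sd of hat(rho) under the null (normal approx.)

def normal_pdf(x, sd):
    return exp(-0.5 * (x / sd) ** 2) / (sd * sqrt(2 * pi))

# Ingredient 1 (pure assumption): prior probability of the null
p_h0 = 0.5
# Ingredient 2 (pure assumption): under H1, rho ~ Normal(0, tau)
tau = 0.1

# Likelihood of the observed hat(rho) under each hypothesis; under H1 the
# normal-normal convolution gives hat(rho) ~ Normal(0, sqrt(tau^2 + sd0^2))
f0 = normal_pdf(rho_hat, sd0)
f1 = normal_pdf(rho_hat, sqrt(tau**2 + sd0**2))

posterior_h0 = f0 * p_h0 / (f0 * p_h0 + f1 * (1 - p_h0))
print(round(posterior_h0, 2))  # ≈ 0.37 under these assumptions
```

Even though the p-value is 0.05, the posterior probability of the null under these arbitrary assumptions comes out around 0.37, which illustrates how strongly the answer depends on the extra ingredients.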

If you are willing to assume that \$P(\rho=0)=0.5\$ (even though I am personally not sure why this should ever be a meaningful assumption), you will still need to assume a distribution of \$\rho\$ under the alternative. In that case, you will be able to compute something called the Bayes factor:

\$\$B=\frac{P\big( \hat\rho= 0.062 \;\big|\;\rho=0\big)}{P\big( \hat\rho= 0.062 \;\big|\;\rho\ne0\big)}.\$\$

As you can see, the Bayes factor does not depend on the prior probability of the null, but it does depend on the assumed prior distribution of \$\rho\$ under the alternative.
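This dependence is easy to demonstrate numerically. The sketch below recomputes the Bayes factor for several assumed prior scales \$\tau\$, taking \$\rho \sim N(0,\tau)\$ under the alternative and the normal approximation \$\hat\rho\,|\,\rho \sim N(\rho, 1/\sqrt{n-1})\$; the \$\tau\$ values are illustrative assumptions, not part of the problem:

```python
from math import exp, pi, sqrt

n, rho_hat = 1000, 0.062
sd0 = 1 / sqrt(n - 1)  # null sampling sd of hat(rho), normal approximation

def normal_pdf(x, sd):
    return exp(-0.5 * (x / sd) ** 2) / (sd * sqrt(2 * pi))

f0 = normal_pdf(rho_hat, sd0)  # P(hat(rho) = 0.062 | rho = 0)

# Marginal likelihood under H1: rho ~ Normal(0, tau) convolved with the
# sampling noise gives hat(rho) ~ Normal(0, sqrt(tau^2 + sd0^2))
for tau in (0.02, 0.1, 0.5):
    f1 = normal_pdf(rho_hat, sqrt(tau**2 + sd0**2))
    print(tau, round(f0 / f1, 2))
```

With these assumptions the Bayes factor ranges from below 1 (evidence against the null) for small \$\tau\$ to above 2 (evidence for the null) for \$\tau=0.5\$: even the direction of the evidence can flip depending purely on the assumed alternative.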

[Please note that the numerator of the Bayes factor is not the p-value, because it contains an equality instead of an inequality sign. So when computing the Bayes factor or \$P(H_0\;|\;\text{data})\$ we are not using the p-value itself at all. But we are, of course, using the sampling distribution \$P(\hat\rho\;|\;\rho=0)\$.]
