Solved – Understanding Fisher’s combined test

I am using Fisher's combined test to combine the results of several independent tests, and in some cases I have trouble understanding the outcome.

Example:
Let's say I run two different tests, both with the alternative hypothesis that $\mu < 0$. Suppose $n$ is identical and the two samples have the same estimated variance, but one test yields a sample mean of $1.5$ and the other $-1.5$. I then get two complementary $p$-values (e.g., $0.995$ and $0.005$). Interestingly, combining the two brings about a significant $p$-value in the Fisher test: $p = 0.0175$.

This is weird, because I could have chosen the exact opposite alternative $(\mu > 0)$ on the same sampled results and still get $p = 0.0175$. It's almost as if Fisher's test does not take the direction of the hypothesis into account.
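To make the symmetry concrete, here is a minimal sketch (assuming the rounded $p$-values above) of Fisher's statistic computed by hand with SciPy. Since the statistic depends only on the product of the $p$-values, swapping the two $p$-values between the tests leaves it unchanged:

```python
import math

from scipy.stats import chi2


def fisher_combined_p(pvals):
    """Fisher's method: -2 * sum(log p_i) is chi-squared with 2k df
    under the joint null of k independent tests."""
    stat = -2.0 * sum(math.log(p) for p in pvals)
    return chi2.sf(stat, df=2 * len(pvals))


# H1: mu < 0 gives p-values (0.995, 0.005);
# H1: mu > 0 on the same data gives (0.005, 0.995).
p_left = fisher_combined_p([0.995, 0.005])
p_right = fisher_combined_p([0.005, 0.995])
print(p_left == p_right)  # the product of p-values, hence the statistic, is identical
```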

Can anyone explain this?

Thanks

Fisher's combination test is intended to combine information from separate tests done on independent data sets, in order to gain power when the individual tests may not have enough on their own. The idea is that if the $k$ null hypotheses are all true, the $p$-values are independently uniformly distributed on $[0, 1]$. This means that $-2 \sum_{i=1}^{k} \log(p_i)$ follows a $\chi^2$ distribution with $2k$ degrees of freedom. Rejecting this combined null hypothesis leads only to the conclusion that at least one of the individual null hypotheses is false; it says nothing about which one, or about the direction of the underlying effects. That is what you are doing when you apply this procedure: each one-sided $p$-value already encodes its direction, but the combination only asks whether any of them is small.
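For reference, SciPy implements Fisher's method directly in `scipy.stats.combine_pvalues`. A short sketch with the example $p$-values from the question:

```python
from scipy.stats import combine_pvalues

# Two independent one-sided tests with p-values 0.995 and 0.005.
# Under both null hypotheses, each p-value is Uniform(0, 1).
stat, p_combined = combine_pvalues([0.995, 0.005], method='fisher')

# stat is -2 * (log(0.995) + log(0.005)), referred to a chi-squared
# distribution with 2k = 4 degrees of freedom; p_combined is its tail
# probability. Reversing both alternatives maps each p to 1 - p, and the
# reversed pair (0.005, 0.995) has the same product, hence the same result.
print(stat, p_combined)
```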
