I am using Fisher's combined probability test to combine several independent tests, and in some cases I have trouble understanding the results.
Example:
Let's say I run two different tests, both with the one-sided alternative that $\mu < 0$. Let's say that $n$ is identical and the two samples have the same calculated variance, but one test yielded a sample mean of $1.5$ and the other $-1.5$. I get two complementary $p$-values (e.g., $0.995$ and $0.005$). Interestingly, combining the two produces a significant $p$-value in the Fisher test: $p = 0.0175$.
This is weird because I could have chosen the exact opposite hypothesis $(\mu > 0)$ with the same sampled results and still get $p = 0.0175$. It's almost as if the Fisher test does not take the direction of the hypothesis into account.
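For concreteness, here is a minimal sketch in Python of what I mean, using the rounded example $p$-values above (so the combined value will not come out to exactly $0.0175$, but the symmetry is the same):

```python
import numpy as np
from scipy import stats

# Rounded example p-values from the two one-sided tests of mu < 0
p_vals = [0.995, 0.005]

# Fisher's combined statistic: -2 * sum(log(p_i)), chi-square with 2k df under H0
stat = -2 * np.sum(np.log(p_vals))
p_combined = stats.chi2.sf(stat, df=2 * len(p_vals))

# Flipping the alternative to mu > 0 turns each p into 1 - p, which here
# just swaps the two values, so the combined p-value is identical.
p_vals_flipped = [1 - p for p in p_vals]
stat_flipped = -2 * np.sum(np.log(p_vals_flipped))
p_combined_flipped = stats.chi2.sf(stat_flipped, df=2 * len(p_vals_flipped))

print(p_combined, p_combined_flipped)  # the same number both times
```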
Can anyone explain this?
Thanks
Best Answer
The Fisher combination test is intended to combine information from separate tests done on independent data sets, in order to gain power when the individual tests may not have sufficient power on their own. The idea is that if the $k$ null hypotheses are all true, the $p$-values are independently and uniformly distributed on $[0,1]$. This means that $-2\sum_{i=1}^{k}\ln(p_i)$ follows a $\chi^2$ distribution with $2k$ degrees of freedom. Rejecting this combined null hypothesis leads to the conclusion that at least one of the individual null hypotheses is false. That is what you are doing when you apply this procedure.
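As a quick illustration of that claim (a minimal sketch I'm adding here, with randomly generated $p$-values standing in for $k$ tests whose nulls are all true):

```python
import numpy as np
from scipy import stats

# k independent tests whose nulls are all true: their p-values are iid Uniform(0,1),
# so -2 * sum(log(p_i)) follows a chi-square distribution with 2k degrees of freedom.
rng = np.random.default_rng(0)
k = 2
p = rng.uniform(size=k)

stat = -2 * np.sum(np.log(p))
p_combined = stats.chi2.sf(stat, df=2 * k)
print(stat, p_combined)

# SciPy's built-in version computes the same statistic and combined p-value
print(stats.combine_pvalues(p, method='fisher'))
```

Note that the statistic is symmetric in the $p_i$, so it makes no use of which tail each individual test was run against; it only reacts to how small the $p$-values are.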