Test if two samples of binomial distributions comply with the same p

Suppose I have done:

  • $n_1$ independent trials with an unknown success rate $p_1$ and observed $k_1$ successes.
  • $n_2$ independent trials with an unknown success rate $p_2$ and observed $k_2$ successes.

If now $p_1 = p_2 =: p$ but still unknown, the probability $p(k_2)$ of observing $k_2$ for a given $k_1$ (or vice versa) is proportional to $\int_0^1 B(n_1,p,k_1)\,B(n_2,p,k_2)\,\text{d}p = \frac{1}{n_1+n_2+1}\binom{n_1}{k_1}\binom{n_2}{k_2}\binom{n_1+n_2}{k_1+k_2}^{-1}$, where $B(n,p,k) = \binom{n}{k}p^k(1-p)^{n-k}$ is the binomial probability of $k$ successes in $n$ trials. So if I want to test for $p_1 \neq p_2$, I only need to check in which quantile of the corresponding distribution my observation lies.
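As a quick sanity check, the closed form of the integral can be verified numerically, for instance with SciPy; the counts $n_1=20$, $k_1=7$, $n_2=30$, $k_2=14$ below are arbitrary example values:

```python
from scipy.integrate import quad
from scipy.special import comb
from scipy.stats import binom

def lhs_numeric(n1, k1, n2, k2):
    """Numerically integrate B(n1,p,k1) * B(n2,p,k2) over p in [0, 1]."""
    integrand = lambda p: binom.pmf(k1, n1, p) * binom.pmf(k2, n2, p)
    return quad(integrand, 0.0, 1.0)[0]

def rhs_closed_form(n1, k1, n2, k2):
    """Closed form: C(n1,k1) C(n2,k2) / ((n1+n2+1) C(n1+n2, k1+k2))."""
    return comb(n1, k1) * comb(n2, k2) / ((n1 + n2 + 1) * comb(n1 + n2, k1 + k2))

n1, k1, n2, k2 = 20, 7, 30, 14   # arbitrary example values
print(lhs_numeric(n1, k1, n2, k2))      # numerical integral
print(rhs_closed_form(n1, k1, n2, k2))  # should agree up to integration error
```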

So much for reinventing the wheel. Now my problem is that I cannot find this in the literature, and thus I would like to know: what is the technical term for this test, or for something similar?

The test statistic $p(k_2)$ is that of Fisher's Exact Test.

Since, for a fixed total number of successes $k_1+k_2$, summing over all possible splits between the two samples gives (by Vandermonde's identity) $$\sum_{k_2} \frac{1}{n_1+n_2+1}\binom{n_1}{k_1}\binom{n_2}{k_2}\binom{n_1+n_2}{k_1+k_2}^{-1} = \frac{1}{n_1+n_2+1},$$ normalisation can be obtained by multiplying with $n_1+n_2+1$, and thus $$p(k_2) = \binom{n_1}{k_1}\binom{n_2}{k_2}\binom{n_1+n_2}{k_1+k_2}^{-1},$$ which is exactly the hypergeometric probability underlying Fisher's Exact Test.
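To make the connection concrete, here is a minimal SciPy sketch (the counts are arbitrary example values, not taken from any particular data set): it evaluates the normalised statistic $p(k_2)$, confirms that it coincides with the hypergeometric pmf, and runs `scipy.stats.fisher_exact` on the corresponding $2\times 2$ table.

```python
from scipy.special import comb
from scipy.stats import fisher_exact, hypergeom

n1, k1 = 20, 7    # arbitrary example values
n2, k2 = 30, 14

# Normalised statistic from above: C(n1,k1) C(n2,k2) / C(n1+n2, k1+k2)
p_k2 = comb(n1, k1) * comb(n2, k2) / comb(n1 + n2, k1 + k2)

# The same number as a hypergeometric pmf: draw n1 of the n1+n2 trials,
# with k1+k2 successes in total, and observe k1 successes in the draw.
p_hyper = hypergeom.pmf(k1, n1 + n2, k1 + k2, n1)

print(p_k2, p_hyper)  # the two values coincide

# Fisher's exact test on the corresponding 2x2 contingency table sums these
# probabilities over all tables at least as extreme as the observed one.
odds_ratio, p_value = fisher_exact([[k1, n1 - k1], [k2, n2 - k2]])
print(p_value)
```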
