Suppose we have two independent Poisson-distributed variables $X_1$ and $X_2$. We want to test whether the Poisson parameters are equal, i.e. whether $\lambda_1=\lambda_2$.

Now we have four distinct statistical *exact* tests to choose from:

- the E-test (see Krishnamoorthy and Thomson, "A more powerful test for comparing two Poisson means", *Journal of Statistical Planning and Inference* 119 (2004) 23–35; see also "Checking if two Poisson samples have the same mean" on Cross Validated)
- `poisson.exact(tsmethod="central")`
- `poisson.exact(tsmethod="minlike")`
- `poisson.exact(tsmethod="blaker")`

(the last three from the exactci R package)

Now, given that all these tests are labeled "exact", one would expect them all to yield the same p-values. Contrary to this, the cited paper clearly illustrates that methods 2–4 give different significance results. Furthermore, I implemented the E-test myself and found that it gives yet another, distinct result. Why is that?


#### Best Answer

The p-value of a hypothesis test, or the corresponding confidence interval, depends on how two issues are handled:

**1. Treatment of nuisance parameter**

To preserve the size at the exact level, the type 1 error needs to be less than or equal to alpha for all possible values of the nuisance parameter. The null hypothesis that the rates are equal does not constrain the common rate itself, so that common rate is a nuisance parameter.

Conditional tests like the exact conditional Poisson test or Fisher's exact test remove the nuisance parameter by conditioning on a summary statistic.
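To make the conditioning trick concrete, here is a minimal Python sketch (an illustration, not the exactci implementation) for one observation per group with equal exposure times: conditional on the total $n = x_1 + x_2$, $X_1$ follows a Binomial$(n, 1/2)$ distribution under the null, so the common rate drops out entirely.

```python
from math import comb

def conditional_central_pvalue(x1, x2):
    """Central exact conditional test of lambda1 == lambda2.

    Conditional on the total n = x1 + x2, X1 ~ Binomial(n, 1/2) under H0
    (assuming equal exposure times), so the nuisance parameter cancels.
    The central two-sided p-value doubles the smaller tail, capped at 1.
    """
    n = x1 + x2
    pmf = [comb(n, k) * 0.5 ** n for k in range(n + 1)]
    lower = sum(pmf[: x1 + 1])  # P(X1 <= x1)
    upper = sum(pmf[x1:])       # P(X1 >= x1)
    return min(1.0, 2 * min(lower, upper))

# Hypothetical counts: 10 events in group 1, 3 in group 2
p = conditional_central_pvalue(10, 3)
```

This corresponds in spirit to `poisson.exact(tsmethod="central")`; the exactci package additionally handles unequal exposure times and inverts the test to get confidence intervals.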

Unconditional exact tests need to guarantee that the size is correct by taking the max (or sup) over all possible values of the nuisance parameter. The Berger–Boos test limits the space of nuisance-parameter values over which the `max` is taken, but adds a correction so that the test remains exact, i.e. preserves the size alpha.

The Poisson E-test is not an exact test in this sense: it uses the "exact" Poisson distribution, but evaluates it at the estimated value of the nuisance parameter rather than maximizing over it.
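The contrast between the two treatments can be sketched in a few lines of Python. This is a simplification, not the published algorithm: both approaches sum the null probability of all outcome pairs at least as extreme as the observed one, here using the Wald-type statistic $|y_1 - y_2|/\sqrt{y_1 + y_2}$; the unconditional exact test takes the sup of that probability over the nuisance parameter (here a coarse grid, an assumption of this sketch), while the E-test plugs in the pooled estimate $\hat\lambda = (x_1 + x_2)/2$. The grid range and truncation bound are also choices of this sketch.

```python
from math import exp, sqrt

def poisson_pmfs(lam, n):
    """P(Y = k) for k = 0..n-1, computed iteratively for numerical stability."""
    out = [exp(-lam)]
    for k in range(1, n):
        out.append(out[-1] * lam / k)
    return out

def tail_prob(x1, x2, lam, trunc=60):
    """P_lam(|Y1 - Y2| / sqrt(Y1 + Y2) >= t_obs) for Y1, Y2 iid Poisson(lam)."""
    t_obs = abs(x1 - x2) / sqrt(x1 + x2) if x1 + x2 > 0 else 0.0
    pmf = poisson_pmfs(lam, trunc)
    p = 0.0
    for y1 in range(trunc):
        for y2 in range(trunc):
            tot = y1 + y2
            t = abs(y1 - y2) / sqrt(tot) if tot else 0.0
            if t >= t_obs - 1e-12:
                p += pmf[y1] * pmf[y2]
    return min(1.0, p)

def unconditional_exact_pvalue(x1, x2):
    """Sup over an illustrative grid of nuisance-parameter values."""
    return max(tail_prob(x1, x2, lam) for lam in [0.5 * i for i in range(1, 31)])

def e_test_pvalue(x1, x2):
    """E-test: evaluate the same tail probability at the pooled estimate."""
    lam = (x1 + x2) / 2
    return 1.0 if lam == 0 else tail_prob(x1, x2, lam)

p_sup = unconditional_exact_pvalue(10, 3)
p_e = e_test_pvalue(10, 3)
```

Since the plug-in value is just one point in the sup, the E-test p-value can only be smaller or equal: it trades guaranteed exactness for power, which is exactly the paper's selling point.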

**2. Location of two-sided rejection region**

A two-sided test has a rejection region in both the lower and the upper tail. The requirement that the size of the test is at most alpha constrains the total probability of landing in either tail, but it does not pin down the probability in each tail separately.

"central" or equal tail methods limit the probability of the each tail to be less than or equal to half the size, alpha / 2.

"minlike" uses the likelihood value (based on likelihood ratio test) to find the non-rejection region. The corresponding profile confidence interval will not have equal tails in skewed distributions like Poisson or Binomial.

One point that Michael P. Fay emphasizes is that hypothesis tests and confidence intervals are often not consistent with each other.

For example, the exact Poisson test in R uses the "minlike" exact p-value, but reports exact "central" (equal-tail) confidence intervals.

In a one-sided test the location of the rejection region is fixed, so the distinction between "minlike" and "central" becomes irrelevant: there is only one tail, and an exact test needs to preserve the size for that tail at level alpha.

### Similar Posts:

- Solved – Conflict between Poisson confidence interval and p-value
- Solved – the difference using a Fisher’s Exact Test vs. a Logistic Regression for $2 \times 2$ tables
- Solved – How does R’s “poisson.test” function work, mathematically
- Solved – On Fisher’s exact test: What test would have been appropriate if the lady hadn’t known the number of milk-first cups