I am not sure, but I think R is not doing what it is supposed to do.

My understanding of the parameter `alternative="greater"` in `binom.test()` is that it corresponds to the following hypothesis:

$$H_0: p \le p_0 \quad \text{vs.} \quad H_1: p > p_0$$

(I mean it says "greater", not "greater than or equal to".)

The p-value should then be calculated as follows:

$$p(c) = P_{H_0}(Z>c) = 1 - P_{H_0}(Z\le c)$$

Here is the equivalent R code:

```r
binom.test(8, 10, alternative = "greater", p = 0.5)$p.value
# [1] 0.0546875
1 - pbinom(8, 10, 0.5)      # 1 - P(Z <= c)
# [1] 0.01074219
1 - pbinom(8 - 1, 10, 0.5)  # 1 - P(Z < c)
# [1] 0.0546875
```

So where is my mistake? Or is R just a little imprecise? And if I am right and R really is testing the "wrong" (= "not exactly what one would expect") hypothesis: what about the other tests? Does "greater" always mean $\geq$?


#### Best Answer

The p-value is defined as the probability of a result *at least as extreme* as the observed result ($c$, in your notation) and should therefore be $$P_{H_0}(Z\geq c)=1-P_{H_0}(Z<c)=1-P_{H_0}(Z\leq c-1).$$

Your error lies in defining the p-value as the probability of a *more extreme* result than what was observed.

Note that for continuous random variables, $P_{H_0}(Z\geq c)=P_{H_0}(Z>c)$, so it doesn't matter whether equality is included or not. It is the discreteness of the binomial distribution that causes this to matter.
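You can verify in R that `binom.test()` computes exactly $P_{H_0}(Z\geq c)$ by comparing it against the CDF and against a direct sum over the upper tail (a quick sanity check with your numbers, $c=8$, $n=10$, $p_0=0.5$):

```r
# One-sided p-value from binom.test, the CDF, and a direct tail sum
p_test <- binom.test(8, 10, p = 0.5, alternative = "greater")$p.value
p_tail <- 1 - pbinom(8 - 1, 10, 0.5)  # P(Z >= 8) = 1 - P(Z <= 7)
p_sum  <- sum(dbinom(8:10, 10, 0.5))  # sum over the tail {8, 9, 10}
c(p_test, p_tail, p_sum)
# [1] 0.0546875 0.0546875 0.0546875
```

All three agree: the p-value is $56/1024 = 0.0546875$, the probability of 8 *or more* successes.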

An effect of this is that a test at significance level $\alpha=0.05$ rarely has type I error rate (size) equal to $0.05$. The actual size of the one-sided binomial test for different $p\in(0,1)$ and $n$ is shown in the figure below. As you can see, the size oscillates quite a lot. It is, however, bounded by $\alpha$ for all $p$.
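The size calculation behind such a figure can be sketched as follows. The helper `size_binom` is my own illustration, not part of base R: it computes, for each possible count, the one-sided p-value, and then sums the null probabilities of the counts that get rejected.

```r
# Actual size of the one-sided binomial test at nominal level alpha.
# size_binom is a hypothetical helper for illustration.
size_binom <- function(n, p0, alpha = 0.05) {
  k <- 0:n
  pvals  <- 1 - pbinom(k - 1, n, p0)  # p-value P(Z >= k) for each count k
  reject <- pvals <= alpha            # rejection region at level alpha
  sum(dbinom(k, n, p0)[reject])       # P(reject) under H0 = actual size
}
size_binom(10, 0.5)
# [1] 0.01074219
```

For $n=10$, $p_0=0.5$ the rejection region is $\{9,10\}$ (since the p-value at 8 is $0.0547 > 0.05$), so the actual size is only $11/1024 \approx 0.011$, well below the nominal $0.05$.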