I am self-learning statistics. I was going through *Probability and Statistics For Engineering and the Sciences* by Jay L. Devore and stumbled upon the following example:

An electronics manufacturer claims that at most 10% of its power supply units need service during the warranty period. To investigate this claim, technicians at a testing laboratory purchase 20 units and subject each one to accelerated testing to simulate use during the warranty period. Let $p$ denote the probability that a power supply unit needs repair during the period (the proportion of all such units that need repair). The laboratory technicians must decide whether the data resulting from the experiment supports the claim that $p \leq 0.10$. Let $X$ denote the number among the 20 sampled that need repair, so $X \sim \text{Bin}(20, p)$.

The explanation given is as follows:

Reject the claim that $p \leq 0.10$ in favor of the conclusion that $p > 0.10$ if $x \geq 5$ (where $x$ is the observed value of $X$), and consider the claim plausible if $x \leq 4$.

The probability that the claim is rejected when $p = 0.10$ (an incorrect conclusion) is

$$P(X \geq 5 \text{ when } p = 0.10) = 1 - B(4;\, 20, 0.1) = 1 - 0.957 = 0.043$$

The probability that the claim is not rejected when $p = 0.20$ (a different type of incorrect conclusion) is

$$P(X \leq 4 \text{ when } p = 0.2) = B(4;\, 20, 0.2) = 0.630$$

The first probability is rather small, but the second is intolerably large. When $p = 0.20$, so that the manufacturer has grossly understated the percentage of units that need service, and the stated decision rule is used, 63% of all samples will result in the manufacturer's claim being judged plausible!

I am not able to understand how the solution connects to the given problem. Also, what does the last paragraph mean?

Why has $x$ been taken as 5 to reject the claim? Shouldn't it be $x = 2$, since if more than 2 out of 20 components fail then the claim fails?

Why has a value of $p=0.2$ been used in the second case?

Could someone please explain it clearly?


#### Best Answer

> Why has $x$ been taken as 5 to reject the claim, shouldn't it be $x=2$ since if more than 2 out of 20 components fail then the claim fails?

No, you're confusing a sample proportion with a population proportion.

If you toss a coin twenty times and it comes up heads only nine times, is that really inconsistent with the true probability being $\frac12$, or could something at least as far from a sample proportion of $\frac12$ easily happen with a fair coin?

To be able to say that the proportion of heads was actually different from $\frac12$, you'd want something that wouldn't be easily explained by ordinary chance (random variation) operating on a fair coin. So, for example, suppose very few or very many heads came up: so few or so many that you would be quite unlikely to see something at least that far away from 50-50 with a fair coin. *Then* you might conclude that the proportion of heads was not consistent with $\frac12$.
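To see how unremarkable nine heads in twenty tosses actually is, you can compute the chance of landing at least that far from the 50-50 split under a fair coin. A minimal sketch in Python using only the standard library (the function names here are my own, not from any particular package):

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Bin(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Nine heads is 1 away from the expected 10; the probability of being
# at least 1 away from 10 is everything except exactly 10 heads.
p_at_least_as_far = 1 - binom_pmf(10, 20, 0.5)
print(f"P(result at least as far from 10 as 9 is) = {p_at_least_as_far:.3f}")
```

This probability is about 0.82, so a result like nine heads gives no reason at all to doubt the coin is fair.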

Similarly in this problem, in order to say that the proportion of units needing repair is not consistent with the true probability ($p$) being no more than $0.1$, you'd look at what happens when $p$ is right on the border (since that has the best chance to produce high numbers of repairs) and if there were too many repairs in the sample to be reasonably consistent with random chance operating on the case with $p= 0.1$ then they would conclude that $p$ was bigger than $0.1$.

[If $x$ is large then either (a) $p$ is not bigger than $0.1$ but a rare event occurred – after all there's still some chance you could see at least as many repairs; or (b) $p$ is bigger than $0.1$ and a less-surprising event occurred. As you push $x$ larger, the "$p$ could still be $0.1$" explanation becomes untenable and you're led to conclude that the null isn't a reasonable explanation. You choose where you will draw the line ahead of time in such a way that the probability you'll reject when the null hypothesis ($p \leq 0.1$) is true is small, while keeping in mind that the smaller you make it, the more the risk of the other kind of error increases.]

They've chosen to make their rule to reject the hypothesis "reject if $x$ is at least $5$", which (as they explain) makes the probability of rejecting when $p$ really is $0.1$ about 4.3%. (This is the type I error rate at $p = 0.1$. The highest probability of rejecting the hypothesis when it's actually true is called the significance level.)
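You can see why $5$ is a sensible cutoff by tabulating the rejection probability at $p = 0.1$ for several candidate rules "reject if $x \geq c$". A short stdlib-only Python sketch (the helper name is my own):

```python
from math import comb

def binom_sf(c, n, p):
    """Upper tail P(X >= c) for X ~ Bin(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c, n + 1))

# Type I error rate at p = 0.1 for each candidate cutoff c:
for c in range(2, 8):
    print(f"reject if x >= {c}: P(X >= {c} | p = 0.1) = {binom_sf(c, 20, 0.1):.3f}")
```

A cutoff of $2$ would reject about 61% of the time even when $p = 0.1$, which is clearly far too often; $c = 5$ brings that down to the 0.043 quoted in the example.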

> Why has a value of $p=0.2$ been used in the second case?

It's just a convenient example value; they could have chosen some other value that was larger than 0.1 and calculated the probability of that kind of error (failing to reject when $p$ is actually larger than $0.1$, which is called type II error). It's interesting to look at a case that is substantially larger because then you can't really say "well, it's pretty close"; they chose a value that was clearly larger than $0.1$ for which the probability of failing to reject the null hypothesis was quite high. They could have chosen say $0.25$ or $0.15$, but $0.101$ would probably not be especially interesting to consider, especially if you're only calculating this probability at one value of $p$ that's larger than $0.1$.

Alternatively, they could have looked at a sequence of values (e.g. $p = 0.15, 0.20, 0.25, 0.3$) and seen how likely they would be to fail to pick up that $p$ was too large for each one. (This is actually a good thing to do, but in this case it wasn't done because they're trying to keep it simple; the method is the same, only the numbers change.)
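That sequence-of-values calculation is easy to carry out yourself. A minimal stdlib-only Python sketch (helper name is my own) computing the type II error probability $P(X \leq 4)$ at each of those values of $p$:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Bin(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Type II error: probability of NOT rejecting (x <= 4) when p is really > 0.1
for p in (0.15, 0.20, 0.25, 0.30):
    print(f"p = {p:.2f}: P(X <= 4) = {binom_cdf(4, 20, p):.3f}")
```

At $p = 0.20$ this reproduces the 0.630 from the example, and you can see the error probability shrink as $p$ moves further above $0.1$, i.e. larger departures from the claim are easier to detect.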

### Similar Posts:

- Solved – Why is p for 8 times heads out of 21 flips not 8/21
- Solved – the concepts of nominal and actual significance level
- Solved – Mathematica’s random number generator deviating from binomial probability
- Solved – Calculating confidence interval for whether some of N coins are unfair