What is the motivation for introducing an additional level of indirection from the descriptive 'false positive' to the integer '1'? Is 'false positive' really too long?


#### Best Answer

Great question; it motivated me to Google it 🙂 Per Wikipedia (with minor formatting edits):

A type I error (or error of the first kind) is the incorrect rejection of a true null hypothesis.

A type II error (or error of the second kind) is the failure to reject a false null hypothesis.
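To make these definitions concrete, here is a minimal simulation sketch (not from the original answer; all names and parameters are illustrative). When the null hypothesis $H_0: \mu = 0$ is in fact true, a two-sided z-test at significance level $\alpha = 0.05$ should commit a Type I error (rejecting the true $H_0$) in roughly 5% of repeated experiments:

```python
import math
import random

random.seed(0)

alpha = 0.05
z_crit = 1.96          # two-sided critical value for alpha = 0.05
n, trials = 30, 10_000
rejections = 0

for _ in range(trials):
    # H0 is true by construction: data drawn from N(mu=0, sigma=1)
    sample = [random.gauss(0, 1) for _ in range(n)]
    # z-statistic with known sigma = 1
    z = (sum(sample) / n) / (1 / math.sqrt(n))
    if abs(z) > z_crit:
        rejections += 1  # rejecting a true H0: a Type I error

print(rejections / trials)  # should be close to alpha = 0.05
```

A Type II error would be the mirror image: draw the data from an alternative such as $\mu = 0.5$ and count the runs where the test *fails* to reject $H_0$.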

Further down the page it discusses the etymology:

In 1928, Jerzy Neyman (1894–1981) and Egon Pearson (1895–1980), both eminent statisticians, discussed the problems associated with "deciding whether or not a particular sample may be judged as likely to have been randomly drawn from a certain population" …

"…in testing hypotheses two considerations must be kept in view, (1) we must be able to reduce the chance of rejecting a true hypothesis to as low a value as desired; (2) the test must be so devised that it will reject the hypothesis tested when it is likely to be false."

They also noted that, in deciding whether to reject or fail to reject a particular hypothesis amongst a "set of alternative hypotheses", $H_1$, $H_2$, …, it was easy to make an error:

"…[and] these errors will be of two kinds:

(I) we reject $H_0$ [i.e., the hypothesis to be tested] when it is true;

(II) we fail to reject $H_0$ when some alternative hypothesis $H_A$ or $H_1$ is true."

In the same paper they call these two sources of error errors of type I and errors of type II, respectively.

So it looks like the first type of error corresponds to Fisher's original work on significance testing. The second type of error arose from Neyman and Pearson's extension of Fisher's work, namely the introduction of the alternative hypothesis and hence hypothesis testing.

It appears that the order in which these types of errors were identified corresponds to their numbering, as given by Neyman and Pearson.
