In basic undergraduate statistics courses, students are (usually?) taught hypothesis testing for the mean of a population.
Why is the focus on the mean rather than the median? My guess is that the mean is easier to test because of the central limit theorem, but I'd love to read some educated explanations.
Because Alan Turing was born after Ronald Fisher.
In the old days, before computers, all this stuff had to be done by hand or, at best, with what we would now call calculators. Tests for comparing means can be done this way – it's laborious, but possible. Tests for quantiles (such as the median) would be pretty much impossible to do this way.
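To illustrate how computation-hungry inference about a median is, here is a sketch of a bootstrap confidence interval for a median — thousands of resamples, each requiring a sort, which is trivial on a computer and hopeless by hand. The data and resample count are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=0.3, scale=1.0, size=50)  # hypothetical sample

# Resample the data with replacement many times and take the median of
# each resample; the spread of these medians estimates the sampling
# variability of the median.
boot_medians = np.array([
    np.median(rng.choice(x, size=x.size, replace=True))
    for _ in range(10_000)
])
lo, hi = np.quantile(boot_medians, [0.025, 0.975])
print(f"95% bootstrap CI for the median: ({lo:.3f}, {hi:.3f})")
```

Each of the 10,000 resamples involves sorting 50 numbers — a few seconds of compute today, but weeks of clerical work in Fisher's day.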
For example, quantile regression relies on minimizing a relatively complicated function. This would not be possible by hand, but it is possible with programming. See e.g. Koenker or Wikipedia.
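To make that "relatively complicated function" concrete: quantile regression minimizes the pinball (check) loss, which is non-smooth and has no closed-form solution, unlike the normal equations of OLS. A minimal sketch, using made-up data and `scipy.optimize` as one possible numerical minimizer:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=200)
y = 2.0 + 0.5 * x + rng.normal(scale=1 + 0.3 * x)  # heteroscedastic noise

def pinball_loss(params, tau=0.5):
    """Check loss for the tau-th conditional quantile of a linear model."""
    a, b = params
    resid = y - (a + b * x)
    # Asymmetric absolute loss: overshoots weighted by tau,
    # undershoots by (1 - tau). tau = 0.5 gives median regression.
    return np.sum(np.where(resid >= 0, tau * resid, (tau - 1) * resid))

# The objective is piecewise linear (not differentiable), so a
# derivative-free method such as Nelder-Mead is a reasonable choice here.
fit = minimize(pinball_loss, x0=[0.0, 0.0], method="Nelder-Mead")
print("median-regression intercept, slope:", fit.x)
```

In practice quantile regression is solved with linear-programming methods (as in Koenker's `quantreg`), but the sketch shows why it is a job for a computer: there is no pencil-and-paper formula to plug into.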
Quantile regression has fewer assumptions than OLS regression and provides more information.