I would like to know what will happen to the sample variance (not the variance of the sample mean) as the sample size increases. I tried to Google it, but all the results are about the variance of the sample mean rather than the sample variance.

My thoughts: it will increase because more samples are taken which means there will be more differences?


#### Best Answer

There are two main estimators of the variance. There's the maximum likelihood (ML) estimator $\hat{\sigma}^2_{ML}=\frac{1}{n}\underset{i=1}{\overset{n}{\sum}}(x_i-\bar{x})^2$ and there's the unbiased estimator $\hat{\sigma}^2=\frac{1}{n-1}\underset{i=1}{\overset{n}{\sum}}(x_i-\bar{x})^2$. The ML estimator is biased, since its expectation is $E(\hat{\sigma}^2_{ML})=\frac{n-1}{n}\sigma^2$, whereas the other estimator is unbiased.
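To see the bias concretely, here is a small simulation sketch (the true variance $\sigma^2 = 4$ and the sample size $n = 5$ are arbitrary choices for illustration). Averaging each estimator over many repeated samples approximates its expectation:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 4.0  # true variance (chosen for this illustration)
n = 5         # small sample size, so the bias factor (n-1)/n = 0.8 is visible

# Average each estimator over many repeated samples to approximate its expectation.
ml_est, unbiased_est = [], []
for _ in range(100_000):
    x = rng.normal(0.0, np.sqrt(sigma2), size=n)
    ml_est.append(np.var(x, ddof=0))        # ML estimator: divides by n
    unbiased_est.append(np.var(x, ddof=1))  # unbiased estimator: divides by n-1

print(np.mean(ml_est))        # close to (n-1)/n * sigma2 = 3.2
print(np.mean(unbiased_est))  # close to sigma2 = 4.0
```

NumPy's `ddof` argument switches between the two divisors, so the same `np.var` call gives either estimator.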

However, for the question at hand, this distinction between the two estimators is not of interest, since both converge to $\sigma^2$ as the sample size goes to infinity. That is, $\underset{n\rightarrow\infty}{\lim}\hat{\sigma}^2_{ML} = \sigma^2$ and $\underset{n\rightarrow\infty}{\lim}\hat{\sigma}^2 = \sigma^2$ (formally, this is convergence in probability; both estimators are consistent).

So the larger the sample size, the closer your estimated variance will tend to be to the true variance.

Another way of thinking about this: if you have observed every observation in the population, you know the true variance exactly. If you observe all observations except one, you still have an extremely good estimate. If you have only a few observations, not so much. So the more observations you have, the better your estimate will be.
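The consistency described above can be sketched with a quick simulation (again assuming a true variance of $4$ purely for illustration). As $n$ grows, the sample variance settles near the true value:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2 = 4.0  # true variance (chosen for this illustration)

# Compute the (unbiased) sample variance for increasingly large samples.
estimates = {}
for n in (10, 100, 10_000, 1_000_000):
    x = rng.normal(0.0, np.sqrt(sigma2), size=n)
    estimates[n] = np.var(x, ddof=1)
    print(n, estimates[n])  # the estimate drifts toward 4.0 as n grows
```

A single run at small $n$ can land far from $4$, which is exactly the point: the spread of the estimator around $\sigma^2$ shrinks as the sample size increases.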
