# Solved – Asymptotic distribution of $$\sqrt{n}\left(\hat{\sigma}_{1}^{2}-\sigma^2\right)$$

I'm trying to find a confidence interval for the variance $$\sigma^{2}$$ of a sample $$X_{1},\dots,X_{n}$$ with known mean $$\mu$$ that may violate the normality assumption. To do this, I'm investigating the asymptotic distribution of $$\sqrt{n}\left(\hat{\sigma}_{1}^{2}-\sigma^{2}\right)$$, where $$\hat{\sigma}_{1}^{2} = \frac{1}{n}\sum_{i=1}^{n}(X_{i}-\mu)^{2}$$, in the hope that it can be used as a pivot.

However, I'm struggling to derive the asymptotic distribution. I have a hint that I'm meant to use the fact that $$\hat{\sigma}_{1}^{2}$$ is itself the mean of a sample from the distribution of $$(X-\mu)^{2}$$.

I've tried starting from the central limit theorem, $$\sqrt{n}(\overline{X}_{n}-\mu)\stackrel{d}{\to}\mathcal{N}(0,\sigma^2)$$, together with the delta method, but I can't seem to get anywhere.

Any ideas on deriving the asymptotic distribution?


No delta method is required to get the asymptotic distribution of $$\widehat{\sigma}^2$$.

Asymptotic distribution of $$\widehat{\sigma}^2 = \frac{1}{n}\sum_{i=1}^n (X_i-\overline{X})^2$$:

In the following we assume an iid sample $$X_1, \dots, X_n$$ with $$0 < \operatorname{Var}\left((X_i-\mu)^2\right) < \infty$$, where $$\mathbb{E}(X_i)=\mu$$. We then have by the usual decomposition:

\begin{align*} \widehat{\sigma}^2 & = \frac{1}{n}\sum_{i=1}^n (X_i-\overline{X})^2\\ & = \frac{1}{n}\sum_{i=1}^n \left((X_i- \mu) + (\mu-\overline{X})\right)^2\\ & = \frac{1}{n}\sum_{i=1}^n (X_i- \mu)^2 + \frac{2}{n} \sum_{i=1}^n(X_i- \mu)(\mu-\overline{X}) + (\mu-\overline{X})^2\\ & = \frac{1}{n}\sum_{i=1}^n (X_i- \mu)^2 + 2(\overline{X}-\mu)(\mu-\overline{X}) + (\overline{X}-\mu)^2\\ &=\underbrace{\frac{1}{n}\sum_{i=1}^n (X_i- \mu)^2}_{A} - \underbrace{(\overline{X}-\mu)^2}_{B}, \end{align*} where the cross term was simplified using $$\frac{1}{n}\sum_{i=1}^n (X_i-\mu) = \overline{X}-\mu$$. Part A is our main term. By assumption we have $$0 < \varsigma^2 := \operatorname{Var}\left((X_i-\mu)^2\right) < \infty$$ and hence, by an application of the Lindeberg–Lévy central limit theorem to the iid variables $$(X_i-\mu)^2$$ with $$\mathbb{E}\left((X_i-\mu)^2\right)=\sigma^2$$, we derive for Part A: $$\sqrt{n}\left(A-\sigma^2\right) \stackrel{d}{\to}\mathcal{N}(0,\varsigma^2).$$
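As a sanity check on the algebra, the identity $$\widehat{\sigma}^2 = A - B$$ holds exactly for any sample with known mean. A minimal sketch (the exponential distribution, its scale, and the seed are arbitrary choices):

```python
import numpy as np

# Numerical check of the decomposition sigma_hat^2 = A - B.
# Exp(scale=2) has true mean mu = 2; any distribution with known mean works.
rng = np.random.default_rng(0)
mu = 2.0
x = rng.exponential(scale=2.0, size=1_000)

sigma_hat2 = np.mean((x - x.mean()) ** 2)  # (1/n) * sum (X_i - Xbar)^2
A = np.mean((x - mu) ** 2)                 # (1/n) * sum (X_i - mu)^2
B = (x.mean() - mu) ** 2                   # (Xbar - mu)^2

assert np.isclose(sigma_hat2, A - B)       # identity holds up to float error
```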

Part B, on the other hand, is asymptotically negligible: \begin{align*} \sqrt{n}B & = \sqrt{n}(\overline{X}-\mu)^2\\ & = \left(\sqrt{n}(\overline{X}-\mu)\right)(\overline{X}-\mu) \\ & = \left(\sqrt{n}(\overline{X}-\mu)\right)(0 + o_p(1)) \\ & = O_p(1)(0+o_p(1)) \\ & = o_p(1). \end{align*} In words: $$(\overline{X}-\mu)$$ converges in probability to the constant $$0$$. At the same time, $$\sqrt{n}(\overline{X}-\mu)$$ converges in distribution to a normally distributed random variable. Hence $$\left(\sqrt{n}(\overline{X}-\mu)\right)(\overline{X}-\mu)$$ converges by Slutsky's theorem in distribution to $$0$$, and since $$0$$ is a constant, it also converges to $$0$$ in probability.
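The $$o_p(1)$$ claim can also be seen numerically: since $$\mathbb{E}(B)=\sigma^2/n$$, the average of $$\sqrt{n}\,B$$ is of order $$n^{-1/2}$$ and shrinks as $$n$$ grows. A sketch with an arbitrary exponential example:

```python
import numpy as np

# Monte Carlo illustration that sqrt(n) * B = sqrt(n) * (Xbar - mu)^2 is o_p(1).
# Exp(scale=2): mu = 2, sigma^2 = 4, so E[sqrt(n) * B] = sigma^2 / sqrt(n) -> 0.
rng = np.random.default_rng(1)
mu, reps = 2.0, 1_000

means = {}
for n in (100, 10_000):
    x = rng.exponential(scale=2.0, size=(reps, n))
    means[n] = np.mean(np.sqrt(n) * (x.mean(axis=1) - mu) ** 2)

print(means)  # the n=10_000 average is about ten times smaller than at n=100
```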

We can thus conclude by another application of Slutsky's theorem that $$\sqrt{n}(\widehat{\sigma}^2 - \sigma^2) \stackrel{d}{\to} \mathcal{N}(0,\varsigma^2),$$ where $$\varsigma^2= \operatorname{Var}\left((X_i-\mu)^2\right) = \mathbb{E}\left((X_i-\mu)^4\right) - \left(\mathbb{E}\left((X_i-\mu)^2\right)\right)^2 = \mathbb{E}\left((X_i-\mu)^4\right) - \sigma^4.$$
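This limit is easy to verify by simulation. For an exponential with scale $$2$$ we have $$\sigma^2=4$$ and $$\mathbb{E}\left((X-\mu)^4\right)=9\sigma^4=144$$, so $$\varsigma^2 = 144-16=128$$; the empirical variance of $$\sqrt{n}(\widehat{\sigma}^2-\sigma^2)$$ should be close to that value. A sketch (distribution, sample size, and replication count are arbitrary):

```python
import numpy as np

# Empirical check that Var(sqrt(n) * (sigma_hat^2 - sigma^2)) -> varsigma^2.
# Exp(scale=2): sigma^2 = 4, E[(X - mu)^4] = 9 * sigma^4 = 144, varsigma^2 = 128.
rng = np.random.default_rng(2)
n, reps = 10_000, 4_000
sigma2, varsigma2 = 4.0, 128.0

stats = np.empty(reps)
for r in range(reps):
    x = rng.exponential(scale=2.0, size=n)
    sigma_hat2 = np.mean((x - x.mean()) ** 2)
    stats[r] = np.sqrt(n) * (sigma_hat2 - sigma2)

print(stats.var())  # close to 128
```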

Extra: asymptotic $$(1-\alpha)$$-confidence interval for $$\sigma^2$$

Let $$z_{\alpha}$$ be the $$\alpha$$-quantile of the standard normal distribution. An asymptotic $$(1-\alpha)$$-confidence interval for $$\sigma^2$$ is then given by $$\left[\widehat{\sigma}^2- z_{1-\alpha/2} \frac{\varsigma}{\sqrt{n}},\; \widehat{\sigma}^2 + z_{1-\alpha/2}\frac{\varsigma}{\sqrt{n}}\right],$$ where we obviously have to replace the unknown $$\varsigma = \sqrt{\mathbb{E}\left((X_i-\mu)^4\right) - \sigma^4}$$ by a consistent estimator, e.g. $$\widehat{\varsigma} = \sqrt{\frac{1}{n}\sum_{i=1}^n (X_i - \overline{X})^4 - \widehat{\sigma}^4}.$$

The coverage follows since, asymptotically, \begin{align*} 1-\alpha & = P\left(-z_{1-\alpha/2}\leq \sqrt{n}\,\frac{\widehat{\sigma}^2-\sigma^2}{\varsigma}\leq z_{1-\alpha/2}\right)\\ & = P\left(\widehat{\sigma}^2- z_{1-\alpha/2} \frac{\varsigma}{\sqrt{n}} \leq \sigma^2 \leq \widehat{\sigma}^2 + z_{1-\alpha/2}\frac{\varsigma}{\sqrt{n}}\right). \end{align*}
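Putting it all together, the coverage of the plug-in interval can be checked by simulation. A sketch under arbitrary choices (exponential data, $$n=5000$$, $$\alpha=0.05$$); note the coverage is only asymptotically $$0.95$$ and tends to undershoot slightly for heavy-tailed data at moderate $$n$$:

```python
import numpy as np
from statistics import NormalDist

# Coverage check for the asymptotic CI  sigma_hat^2 +- z * varsigma_hat / sqrt(n).
# Exp(scale=2): true sigma^2 = 4 (an arbitrary non-normal example).
rng = np.random.default_rng(3)
n, reps, sigma2 = 5_000, 2_000, 4.0
z = NormalDist().inv_cdf(0.975)

hits = 0
for _ in range(reps):
    x = rng.exponential(scale=2.0, size=n)
    s2 = np.mean((x - x.mean()) ** 2)                    # sigma_hat^2
    vs = np.sqrt(np.mean((x - x.mean()) ** 4) - s2 ** 2) # varsigma_hat
    half = z * vs / np.sqrt(n)
    hits += (s2 - half <= sigma2 <= s2 + half)

print(hits / reps)  # near 0.95, typically a bit below for skewed data
```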
