# Solved – the logic behind method of moments

Why, in the Method of Moments, do we equate sample moments to population moments to find a point estimator?

What is the logic behind this?


A sample consisting of \$n\$ realizations of independently and identically distributed random variables is ergodic. In such a case, sample moments are consistent estimators of the theoretical moments of the common distribution, provided those theoretical moments exist and are finite.
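As a quick illustration of this consistency (an assumed example, not from the original post): for an Exponential(1) variable the \$k\$-th theoretical moment is \$k!\$, so the second sample moment of a large i.i.d. sample should land near \$2\$.

```python
import random

# Sketch (assumed example): sample moments of i.i.d. draws approach the
# theoretical moments as n grows. For X ~ Exponential(1), E[X^k] = k!,
# so E[X^2] = 2.
random.seed(0)

def sample_moment(xs, k):
    """k-th sample moment: average of x^k over the sample."""
    return sum(x**k for x in xs) / len(xs)

draws = [random.expovariate(1.0) for _ in range(200_000)]
m2 = sample_moment(draws, 2)  # close to the theoretical value 2
```

Increasing the sample size shrinks the gap between `m2` and 2, which is exactly the \$e_k(n) \xrightarrow{p} 0\$ statement below.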

This means that

\$\$\hat \mu_k(n) = \mu_k(\theta) + e_k(n), \;\;\; e_k(n) \xrightarrow{p} 0 \tag{1}\$\$

So by equating the theoretical moment with the corresponding sample moment we have

\$\$\hat \mu_k(n) = \mu_k(\theta) \Rightarrow \hat \theta(n) = \mu_k^{-1}\big(\hat \mu_k(n)\big) = \mu_k^{-1}\big[\mu_k(\theta) + e_k(n)\big]\$\$

So, since \$\mu_k\$ does not depend on \$n\$ and \$\mu_k^{-1}\$ is continuous (continuous mapping theorem),

\$\$\text{plim}\, \hat \theta(n) = \text{plim}\big[\mu_k^{-1}\big(\mu_k(\theta) + e_k(n)\big)\big] = \mu_k^{-1}\big(\mu_k(\theta) + \text{plim}\, e_k(n)\big)\$\$

\$\$= \mu_k^{-1}\big(\mu_k(\theta) + 0\big) = \mu_k^{-1}\big(\mu_k(\theta)\big) = \theta\$\$

In short, we equate sample moments to theoretical moments because doing so yields consistent estimators of the unknown parameters.
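The whole pipeline can be sketched end to end (an assumed example, not from the original post): for an Exponential(\$\lambda\$) distribution the first theoretical moment is \$\mu_1(\lambda) = 1/\lambda\$, so inverting \$\mu_1\$ at the sample mean gives the method-of-moments estimate \$\hat\lambda = 1/\bar x\$, which tightens around the true rate as \$n\$ grows.

```python
import random

# Sketch (assumed example): method-of-moments estimation of the rate of an
# Exponential(lambda) distribution. mu_1(lambda) = 1/lambda, so applying
# mu_1^{-1} to the first sample moment gives lambda_hat = 1 / sample mean.
random.seed(1)
true_rate = 2.5  # hypothetical true parameter

def mom_rate(n):
    """MoM estimate of the rate from n i.i.d. Exponential(true_rate) draws."""
    draws = [random.expovariate(true_rate) for _ in range(n)]
    sample_mean = sum(draws) / n          # first sample moment
    return 1.0 / sample_mean              # mu_1^{-1} applied to it

# Consistency in action: estimates approach true_rate as n grows.
estimates = [mom_rate(n) for n in (100, 10_000, 500_000)]
```

The last estimate, computed from the largest sample, should be very close to `true_rate`, which is the consistency result derived above.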
