In statistics, it is important to distinguish between the following three concepts, which students often confuse.

Books usually denote an unknown **parameter** by $\theta$; this is what we wish to estimate. For that we use an **estimator**, which books usually denote by $\widehat{\theta}$. The estimator is a random variable! Typically we want it to be unbiased, i.e. $E[\widehat{\theta}]=\theta$, and so on. An **estimate** is the value we obtain by sampling and plugging the observed values into our estimator.

A classical example is:

- Parameter: the population mean $\mu$.
- Estimator: $\overline{X}=\frac{1}{n}\sum_{i=1}^{n} X_i$, based on the *a priori* (not yet observed) random variables $X_1,\dots,X_n$.
- Estimate: we sample observations $x_1,\dots,x_n$ and compute $\overline{x}=\frac{1}{n}\sum_{i=1}^{n} x_i$.
- Note: $\mu$ is an unknown number, $\overline{X}$ is a random variable, and $\overline{x}$ is a number!
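The three-way distinction in the list above can be sketched in code; this is a minimal illustration (all names are my own), where the population mean is chosen by us so that we can compare the estimates against it:

```python
import random

random.seed(0)

# Parameter: the population mean mu. In practice it is unknown;
# here we pick it ourselves so we can see how close the estimates land.
mu = 5.0

def estimator(sample):
    """The estimator: a function of the sample (here, the sample mean)."""
    return sum(sample) / len(sample)

# Two independent samples of n = 1000 observations each.
sample_a = [random.gauss(mu, 2.0) for _ in range(1000)]
sample_b = [random.gauss(mu, 2.0) for _ in range(1000)]

# Two estimates: concrete numbers, one per observed sample.
xbar_a = estimator(sample_a)
xbar_b = estimator(sample_b)

print(xbar_a, xbar_b)  # two different numbers, both near mu
```

The same function applied to two different samples gives two different numbers, which is exactly the parameter/estimator/estimate distinction: one fixed $\mu$, one function, many possible values $\overline{x}$.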

With concrete Latin letters it seems easy to stress this fact, but when we use $\theta$ and $\widehat{\theta}$ (the classical hat notation for an estimator) I do not know how to stress it. I do not know how to distinguish between the random variable $\widehat{\theta}$ and a specific observed value of $\widehat{\theta}$.

Some books propose $\widehat{\theta}_{obs}$, which I do not like: if, for instance, we talk about two population proportions $p_1$ and $p_2$ and their estimators $\widehat{p}_1$ and $\widehat{p}_2$, it would become $\widehat{p}_{1,obs}$, which is not aesthetic.

What solutions do you propose? Has anyone seen a nice notation for this?


#### Best Answer

There is no single answer to this question because different authors use different notation. For me, the handiest notation is the one used, for example, by Larry Wasserman in *All of Statistics*:

> By convention, we denote a point estimate of $\theta$ by $\hat\theta$ or $\widehat\theta_n$. Remember that $\theta$ is a fixed, unknown quantity. The estimate $\hat\theta$ depends on the data, so $\hat\theta$ is a random variable. More formally, let $X_1,\dots,X_n$ be $n$ iid data points from some distribution $F$. A point estimator $\widehat\theta_n$ of a parameter $\theta$ is some function of $X_1,\dots,X_n$:

$$ \widehat\theta_n = g(X_1,\dots,X_n). $$

So $\theta$ is the unknown parameter, $\hat\theta$ is the estimate, and a function $g$ of the sample is the estimator. Such notation also makes it clear that $g$ is a function.
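To see in what sense $\widehat\theta_n = g(X_1,\dots,X_n)$ is a random variable, one can apply the same function $g$ to many independent samples and look at the spread of the resulting estimates. A small sketch (the numbers are illustrative choices, not from the answer above):

```python
import random
import statistics

random.seed(1)
mu, sigma, n = 5.0, 2.0, 100

def g(sample):
    # The estimator as a plain function of the sample: g(X_1, ..., X_n).
    return sum(sample) / len(sample)

# Apply g to 500 independent samples. Each call yields one estimate;
# the collection approximates the sampling distribution of the estimator.
estimates = [g([random.gauss(mu, sigma) for _ in range(n)]) for _ in range(500)]

print(statistics.mean(estimates))   # near mu (the estimator is unbiased)
print(statistics.stdev(estimates))  # near sigma / sqrt(n) = 0.2
```

The mean of the estimates is close to $\mu$ and their standard deviation is close to $\sigma/\sqrt{n}$, the standard error of the sample mean, which is exactly what the notation $\widehat\theta_n = g(X_1,\dots,X_n)$ expresses.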
