The mean of a dataset is often represented by the Greek letter $\mu$, and the standard deviation of a dataset is often represented by the Greek letter $\sigma$. But what about the standard error? I've seen authors use SE, se, $\sigma_{\bar{x}}$, and $s_{\bar{x}}$. The Wikipedia article on standard error uses both SE and $\sigma_{\bar{x}}$. Is there a standard or commonly used symbol to refer to the standard error of a set of measurements, like $\mu$ for mean and $\sigma$ for standard deviation?


#### Best Answer

A subscript on a symbol often indicates what the symbol refers to. For example, $\mu_X$ is often used to represent the population mean of the variable $X$, and it would be important to use it to distinguish it from $\mu_Y$, the population mean of variable $Y$. Usually, a hat (e.g., $\hat\mu_X$) indicates that a quantity is an estimator of the parameter over which the hat is placed (i.e., $\hat\mu_X$ is an estimator of $\mu_X$). (In this case, it happens that the sample mean, $\bar X = n^{-1}\sum_i X_i$, is often used for $\hat\mu_X$, but other estimators are possible as well.) When only one variable is being discussed, or the parameter in general is being discussed, you can omit the subscript with the understanding that the symbol refers to what you intend it to.

The standard error is the standard deviation of the distribution of an estimator for a given population under specified sampling conditions. Because it's the standard deviation ($\sigma$) of an estimator (hat) of a parameter (e.g., $\theta$), it makes sense to use $\sigma_{\hat\theta}$. This is the standard notation that I have seen. When $\bar X$ is the chosen estimator, $\sigma_{\bar X}$ could also be used to be more specific. When talking about standard errors broadly, it makes sense to just use the words "standard error" or its common abbreviation, SE. When talking about the standard error of a specific estimator, it makes sense to use its symbol to reduce ambiguity.
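The definition above can be checked by simulation: draw many samples, compute $\bar X$ in each, and compare the standard deviation of those sample means with the familiar formula $\sigma_{\bar X} = \sigma/\sqrt{n}$. A minimal sketch (the population parameters and sample size are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma, n = 10.0, 2.0, 25   # population parameters and per-sample size
n_sims = 100_000               # number of simulated samples

# Draw many samples and compute the sample mean of each one.
means = rng.normal(mu, sigma, size=(n_sims, n)).mean(axis=1)

# The standard error of X-bar is the SD of its sampling distribution,
# which should match sigma / sqrt(n).
empirical_se = means.std(ddof=1)
theoretical_se = sigma / np.sqrt(n)

print(empirical_se, theoretical_se)  # both close to 0.4
```

The empirical and theoretical values agree closely, which is exactly what $\sigma_{\bar X}$ denotes: the standard deviation of the estimator $\bar X$ across repeated samples.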

Note that in data applications, we often deal with estimates from an *estimator* of the standard error, i.e., $\hat\sigma_{\hat\theta}$, which itself has a standard error because it is an estimator and its estimates vary from sample to sample. We might denote that standard error as $\sigma_{\hat\sigma_{\hat\theta}}$. This might be relevant if you are comparing multiple estimators of the standard error and you want the one that is the most precise, i.e., that itself has a low standard error. For example, the maximum likelihood, unbiased least squares, and HC0 sandwich standard errors are all estimators of the standard error of a regression slope, but the unbiased least squares estimator tends to have the lowest standard error (i.e., is the most precise estimator of the true standard error of the least-squares estimator of the regression slope).
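The two levels can be illustrated for the sample mean: the usual plug-in estimator $\hat\sigma_{\bar X} = s/\sqrt{n}$ varies from sample to sample, and the standard deviation of those estimates is $\sigma_{\hat\sigma_{\bar X}}$, the standard error of the standard-error estimator. A simulation sketch (same illustrative normal population as one might use above):

```python
import numpy as np

rng = np.random.default_rng(1)

mu, sigma, n = 10.0, 2.0, 25
n_sims = 100_000

samples = rng.normal(mu, sigma, size=(n_sims, n))

# hat-sigma_{X-bar}: the usual estimate s / sqrt(n), one per sample.
se_hats = samples.std(axis=1, ddof=1) / np.sqrt(n)

true_se = sigma / np.sqrt(n)        # sigma_{X-bar} = 0.4
se_of_se_hat = se_hats.std(ddof=1)  # sigma_{hat-sigma_{X-bar}}

print(se_hats.mean(), true_se, se_of_se_hat)
```

The estimated standard errors cluster around the true $\sigma_{\bar X}$, but their spread (`se_of_se_hat`) is not zero; comparing that spread across competing standard-error estimators is how one would judge which is the most precise.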