For consistency of the OLS estimator in the linear model

$$
y_i = \beta^T x_i + \epsilon_i, \; i = 1, \cdots, n,
$$

the model assumptions are usually (the ones I am familiar with)

- The sequence of random vectors $\{ (x_i, \epsilon_i) \}$ is (jointly) strictly stationary and ergodic.
- $\mathbb{E}[x_i \epsilon_i] = 0$ for all $i$.

The LLN then gives consistency. Now suppose one of the regressors, say $(x_i)_1$, equals $i$ (a deterministic linear time trend when $i = t$ indexes a time series). Is consistency still possible? I suppose one trick is to run the regression on the first differences

$$
y_i - y_{i-1} = \beta^T (x_i - x_{i-1}) + (\epsilon_i - \epsilon_{i-1}), \; i = 2, \cdots, n,
$$

and assume the additional orthogonality condition $\mathbb{E}[\epsilon_i x_{i \pm 1}] = 0$. Are there standard or more sophisticated ways to deal with this? What about more general time trends (cyclic, exponential, etc.)?
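To make the first-difference trick concrete, here is a minimal simulation sketch (the data-generating process, coefficient values, and variable names are all hypothetical, chosen only for illustration): OLS in levels on a linear trend plus a stationary regressor, compared with OLS on first differences, where the trend column differences to a constant.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical DGP: y_i = b1 * i + b2 * z_i + eps_i, with z_i stationary (iid here).
t = np.arange(1, n + 1, dtype=float)
z = rng.normal(size=n)
eps = rng.normal(size=n)
y = 0.5 * t + 2.0 * z + eps

# OLS in levels.
X = np.column_stack([t, z])
beta_levels, *_ = np.linalg.lstsq(X, y, rcond=None)

# OLS on first differences: the trend regressor differences to the constant 1,
# and the differenced error is MA(1), but the orthogonality still holds.
dX = np.diff(X, axis=0)
dy = np.diff(y)
beta_diff, *_ = np.linalg.lstsq(dX, dy, rcond=None)

print(beta_levels)  # both estimates close to (0.5, 2.0)
print(beta_diff)
```

Both versions recover the coefficients in this setup; the answer below explains why differencing is not even needed for consistency when the trend is deterministic.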


#### Best Answer

Not only is the OLS estimator consistent in the presence of a deterministic trend, it is, as they say, *superconsistent*: it converges to the true value of the coefficient on the linear trend faster than the usual $O(T^{-1/2})$ rate, namely at $O(T^{-3/2})$. The estimator of the constant term converges at the usual rate, which complicates somewhat the derivation of the vector function that has an asymptotic distribution.

Consider

$$y_t = \beta t + u_t$$

The OLS estimator will be

$$\hat \beta = \frac{\sum_{t=1}^T t y_t}{\sum_{t=1}^T t^2} = \beta + \frac{\sum_{t=1}^T t u_t}{\sum_{t=1}^T t^2}$$

One way to see the convergence is to consider

$$\operatorname{Var}(\hat \beta) = \sigma^2_u \frac{\sum_{t=1}^T t^2}{\left(\sum_{t=1}^T t^2\right)^2} = \sigma^2_u \frac{1}{\sum_{t=1}^T t^2}$$

$$\Rightarrow \operatorname{Var}(\hat \beta) = \frac{\sigma^2_u}{T(T+1)(2T+1)/6}$$

This obviously goes to zero, and *fast*: the denominator grows like $T^3$, so the standard deviation of the estimator shrinks at rate $T^{-3/2}$. The variance of the estimator therefore goes to zero (and the estimator is also unbiased), which together are sufficient conditions for consistency.

The result does not change if we add regressors to the specification; again, the estimators of the coefficients on the stationary regressors will converge at different (slower) rates than the trend coefficient, which of course requires more sophisticated treatment.
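The different rates are easy to see in a simulation. This sketch (again with a hypothetical DGP of my choosing) regresses $y_t$ on a trend and an iid regressor $z_t$, and compares how the RMSE of each coefficient shrinks when $T$ quadruples:

```python
import numpy as np

rng = np.random.default_rng(2)

def coef_rmses(T, reps=2000):
    """Monte Carlo RMSEs of OLS in y_t = 0.5*t + 2.0*z_t + u_t, z_t iid N(0,1)."""
    t = np.arange(1, T + 1, dtype=float)
    e_trend = np.empty(reps)
    e_stat = np.empty(reps)
    for r in range(reps):
        z = rng.normal(size=T)
        y = 0.5 * t + 2.0 * z + rng.normal(size=T)
        b, *_ = np.linalg.lstsq(np.column_stack([t, z]), y, rcond=None)
        e_trend[r] = b[0] - 0.5
        e_stat[r] = b[1] - 2.0
    return np.sqrt((e_trend ** 2).mean()), np.sqrt((e_stat ** 2).mean())

a_trend, a_stat = coef_rmses(50)
b_trend, b_stat = coef_rmses(200)
print(a_trend / b_trend)  # near 8: trend coefficient, rate T^{-3/2}
print(a_stat / b_stat)    # near 2: stationary regressor coefficient, rate T^{-1/2}
```

The two ratios illustrate why a joint asymptotic distribution requires rescaling each coefficient by its own rate.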

Hamilton's *Time Series Analysis*, ch. 16, contains a more elaborate discussion, also examining the asymptotic distribution of the estimator.

In short, of the two assumptions you stated in your question, assumption 1 is a "convenient overkill" as regards consistency in the presence of a deterministic trend; it is mainly needed for stochastic regressors. Note that it is critical for these results that we are talking about a *deterministic* trend.