From what I understand, OLS gives consistent estimates for stationary AR(1) time series but not for unit-root ones. I am trying to illustrate this phenomenon with a small simulation in R but the OLS estimates in the unit-root case seem alright:

```r
res <- c()
for (i in 1:1000) {
  ar <- c(0)
  for (j in 2:1000) ar[j] <- 1 * ar[j - 1] + rnorm(1)
  res[i] <- lm(ar[2:1000] ~ -1 + ar[1:999])$coef[1]
}
mean(res)
```

The above code gives results like 0.998191, 0.9980904, 0.998139, which are very close to the true coefficient of 1. In fact, the same simulation with a stationary AR(1) process often gives estimates that are further off. The only apparent difference is that the estimates in the unit-root case have a skewed distribution while the stationary ones do not.

**What am I doing wrong? How can I show the inconsistency of the OLS estimator for unit-root AR(1) processes by simulation?**

Let me clarify why I thought a unit-root AR(1) process could not be estimated by OLS. In Wooldridge's Introductory Econometrics (2013), Section 11.3, it says: “The previous section shows that, provided the time series we use are weakly dependent, usual OLS procedures are valid under assumptions weaker than the classical linear model assumptions. […] Using time series with strong dependence in regression analysis poses no problem, if the CLM assumptions in Chapter 10 hold. But the usual inference procedures are very susceptible to violation of these assumptions when the data are not weakly dependent, because then we cannot appeal to the law of large numbers and the central limit theorem.” Wooldridge then discusses the unit-root AR(1) model as an example of a strongly dependent time series. Although he does not explicitly claim that the model cannot be estimated by OLS, he also does not state that the CLM assumptions hold for the unit-root AR(1) model. I think this is *very* misleading…


#### Best Answer

You will not be able to show this result (by simulation or otherwise) because it does not hold. When the true AR parameter is unity, the OLS estimator is superconsistent, not inconsistent. See for example the discussion in Hamilton's *Time Series Analysis*, section "Asymptotic Properties of a First-Order Autoregression when the True Coefficient is Unity" (17.4).

What you *can* illustrate with simulation is this superconsistency. Simply repeat the Monte Carlo simulation of the OLS estimate (I used just 1000 replications), similar to what was done above, but for a range of sample sizes. See the plots below:

What you should observe is that the bias of the OLS estimate shrinks toward 0 faster in the nonstationary $\phi=1$ case (although it starts out larger), and its variance shrinks to zero faster as well. OLS works just fine in that case.
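A minimal sketch of such a Monte Carlo comparison in R (the replication count, sample sizes, and the stationary coefficient 0.5 are my choices, not from the original post):

```r
# Compare OLS estimation of AR(1) for phi = 1 (unit root) vs phi = 0.5 (stationary)
# across several sample sizes T.
set.seed(1)

sim_ols <- function(phi, T, nrep = 500) {
  replicate(nrep, {
    y <- numeric(T)
    for (t in 2:T) y[t] <- phi * y[t - 1] + rnorm(1)
    # closed-form OLS slope through the origin,
    # equivalent to lm(y[-1] ~ -1 + y[-T])$coef[1]
    sum(y[-1] * y[-T]) / sum(y[-T]^2)
  })
}

for (T in c(50, 200, 800)) {
  r1 <- sim_ols(1, T)    # unit root
  r5 <- sim_ols(0.5, T)  # stationary
  cat(sprintf("T=%4d  unit-root bias %+.4f (sd %.4f)   stationary bias %+.4f (sd %.4f)\n",
              T, mean(r1) - 1, sd(r1), mean(r5) - 0.5, sd(r5)))
}
```

The unit-root bias should start out larger in magnitude but shrink faster as $T$ grows, and the spread of the unit-root estimates should collapse faster than in the stationary case.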

Edit: In the text mentioned above, the following expression is derived in the case where the true coefficient is unity:

$$\sqrt{T}\left(\hat{\rho}_T - 1\right) \xrightarrow{p} 0$$

Essentially, $\hat{\rho}_T$ (the OLS estimator) converges to 1 so fast that even the scaled estimation error $\sqrt{T}(\hat{\rho}_T - 1)$ vanishes. Consistent estimators typically converge at the $\sqrt{T}$ rate; because this one converges at the faster rate $T$, the behavior is termed "superconsistency".
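The rate can itself be checked by simulation: under a unit root, $\sqrt{T}(\hat{\rho}_T - 1)$ collapses toward 0 as $T$ grows, while $T(\hat{\rho}_T - 1)$ settles into a nondegenerate (skewed, Dickey–Fuller-type) distribution. A sketch in R, with sample sizes of my choosing:

```r
# Under phi = 1, sd of sqrt(T)*(rho_hat - 1) shrinks with T,
# while sd of T*(rho_hat - 1) stabilizes.
set.seed(2)

ols_ar1 <- function(T) {
  y <- cumsum(rnorm(T))              # random walk: AR(1) with phi = 1
  sum(y[-1] * y[-T]) / sum(y[-T]^2)  # OLS slope without intercept
}

for (T in c(100, 1000, 10000)) {
  rho <- replicate(500, ols_ar1(T))
  cat(sprintf("T=%5d  sd of sqrt(T)*(rho-1): %.3f   sd of T*(rho-1): %.3f\n",
              T, sd(sqrt(T) * (rho - 1)), sd(T * (rho - 1))))
}
```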
