Solved – Why does a mean-reverting AR(1) model revert to B0 as opposed to B0 + B1*B0?

In time series analysis, an AR$(1)$ model takes the form:

$$x_t = \beta_0 + \beta_1 \cdot x_{t-1} + w_t,$$

where $w_t$ is the white noise term.

In order for the model to be stationary and to converge to its mean, $\beta_1$ must be less than one in absolute value. However, I don't get why such a model converges to $\beta_0$ as opposed to $\beta_0 + \beta_1 \cdot \beta_0$. Surely $\beta_1 \cdot \beta_0$ is large enough to matter even if $|\beta_1| < 1$. I can see how the rest of the terms eventually approach zero as you recursively keep substituting the expression for $x_t$ into the equation for $x_{t+1}$, but even if you keep doing this all the way out to $x_{t+\infty}$ there is always a $\beta_0 \cdot \beta_1$ term left over.

Is my math wrong?
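It is worth first writing out the recursive substitution from the question in full, since that is exactly where the stray $\beta_0 \cdot \beta_1$ term comes from (a short worked expansion in the same notation, with $k$ counting the number of substitutions):

$$x_{t+k} = \beta_0 \left(1 + \beta_1 + \beta_1^2 + \cdots + \beta_1^{k-1}\right) + \beta_1^{k} \cdot x_t + \sum_{j=0}^{k-1} \beta_1^{j} \cdot w_{t+k-j}.$$

The $\beta_0 \cdot \beta_1$ term never stands alone: it is the second term of a geometric series. With $|\beta_1| < 1$, the $\beta_1^{k} \cdot x_t$ piece vanishes as $k$ grows, the noise terms have mean zero, and the series $\beta_0 \left(1 + \beta_1 + \beta_1^2 + \cdots\right)$ converges to $\beta_0 / (1 - \beta_1)$, which is neither $\beta_0$ nor $\beta_0 + \beta_1 \cdot \beta_0$.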

Taking expectations of both sides (using $\mathbb{E}(w_t) = 0$) and letting $\mu \equiv \mathbb{E}(x_t)$ be the stationary mean, you have:

$$\mu = \beta_0 + \beta_1 \mu.$$

Rearranging then gives:

$$\mu = \frac{\beta_0}{1-\beta_1}.$$
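If you want a numerical check, here is a minimal simulation sketch (Python, with illustrative values $\beta_0 = 2$ and $\beta_1 = 0.5$ chosen for this example rather than taken from the question) that just iterates the recursion and reports where the sample mean settles:

```python
import numpy as np

# Illustrative parameters (assumed for this sketch, not from the question).
# Candidate long-run levels: beta_0 = 2, beta_0 + beta_1*beta_0 = 3, beta_0/(1 - beta_1) = 4.
beta_0, beta_1, n = 2.0, 0.5, 200_000

rng = np.random.default_rng(0)
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    # AR(1) recursion: x_t = beta_0 + beta_1 * x_{t-1} + w_t, with w_t ~ N(0, 1)
    x[t] = beta_0 + beta_1 * x[t - 1] + rng.normal()

# Drop the first half as burn-in and average the rest.
print(x[n // 2:].mean())  # roughly 4.0 = beta_0 / (1 - beta_1), not 2 or 3
```

With these values the three candidate levels ($2$, $3$ and $4$) are well separated, so the printed mean makes it clear which one the process actually reverts to.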
