I discuss estimation of the following $ARX(1)$ model:

$Y_t=α+βX_t+u_t$ where $u_t=ρu_{t-1}+ϵ_t$

Substituting $u_t = Y_t - α - βX_t$ from the first equation into the second, we have:

$Y_t-α- βX_t =ρ(Y_{t-1}-α-βX_{t-1})+ϵ_t$

Rearranging, we have:

$Y_t=α(1-ρ)+ρY_{t-1}+ βX_t-ρβX_{t-1}+ϵ_t$

Can we obtain estimates for the parameters in the rearranged equation using OLS? Well, yes. However, notice that if we did this, we'd be ignoring the fact that the parameters in the model are restricted: in particular, the coefficient on $X_{t-1}$, $-ρβ$, is minus the product of the coefficients on $Y_{t-1}$ and $X_t$. We should utilise this information. The OLS estimator will be unbiased, consistent, and BLUE (the best linear unbiased estimator), but by exploiting the parameter restriction we can find a better estimator than OLS. In other words, the non-linear estimator produced by the Marquardt algorithm will be superior to OLS. Unsurprisingly, EViews estimates all ARMAX models using the Marquardt algorithm.
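As a sketch of what the restricted (nonlinear) estimation looks like outside EViews — here in Python with numpy/scipy, on simulated data with made-up parameter values — one can minimise the sum of squared $ϵ_t$ directly, with the $-ρβ$ restriction built into the residual function; `method="lm"` selects Levenberg–Marquardt:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
n, alpha, beta, rho = 500, 1.0, 2.0, 0.6   # illustrative values

# Simulate Y_t = alpha + beta*X_t + u_t with u_t = rho*u_{t-1} + eps_t
x = rng.normal(size=n)
eps = rng.normal(scale=0.5, size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]
y = alpha + beta * x + u

# Residuals of the restricted model:
# eps_t = Y_t - a(1-r) - r*Y_{t-1} - b*X_t + r*b*X_{t-1}
def resid(theta):
    a, b, r = theta
    return (y[1:] - a * (1 - r) - r * y[:-1]
            - b * x[1:] + r * b * x[:-1])

# method="lm" is Levenberg-Marquardt
fit = least_squares(resid, x0=[0.0, 0.0, 0.0], method="lm")
a_hat, b_hat, r_hat = fit.x
print(a_hat, b_hat, r_hat)  # close to the true 1.0, 2.0, 0.6
```

Only three free parameters are estimated; the coefficient on $X_{t-1}$ is never a free parameter, which is exactly the restriction an unrestricted OLS fit would ignore.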

Consider the following 3 situations:

1. If $ρ=0$, then our model becomes:

$Y_t=α+ βX_t+ϵ_t$

The best thing to do in this case is to ignore $Y_{t-1}$ and $X_{t-1}$, since we know, in theory, they are irrelevant in explaining $Y_{t}$. The Marquardt algorithm is not necessary.

2. If $β=0$ and $ρ≠1$, then our model becomes:

$Y_t=α(1-ρ)+ρY_{t-1}+ϵ_t$

The best thing to do in this case is to ignore $X_t$ and $X_{t-1}$ since we know they are irrelevant variables.
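For this special case, a quick simulated check (Python/numpy; the parameter values are arbitrary) that OLS on a constant and the lag recovers $ρ$, with $α$ backed out as the estimated constant divided by $1-\hat{ρ}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha, rho = 1000, 1.0, 0.6   # illustrative values; beta = 0

# With beta = 0 the model is a plain AR(1):
# Y_t = alpha*(1-rho) + rho*Y_{t-1} + eps_t
y = np.zeros(n)
eps = rng.normal(scale=0.5, size=n)
for t in range(1, n):
    y[t] = alpha * (1 - rho) + rho * y[t - 1] + eps[t]

# OLS of Y_t on a constant and Y_{t-1}
Z = np.column_stack([np.ones(n - 1), y[:-1]])
c_hat, rho_hat = np.linalg.lstsq(Z, y[1:], rcond=None)[0]
alpha_hat = c_hat / (1 - rho_hat)   # the constant estimates alpha*(1-rho)
print(alpha_hat, rho_hat)  # close to the true 1.0 and 0.6
```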

3. If, in the population, $X_{t-1}$ is uncorrelated with the other regressors, in particular $X_t$, then we can just regress $Y_t$ on a constant, $Y_{t-1}$ and $X_t$. Again, this is extremely unlikely, especially because we are dealing with time series variables, which are in all but the rarest circumstances autocorrelated. If, in the sample, $X_{t-1}$ is uncorrelated with the other regressors (a very unlikely scenario), then the OLS estimates of $α(1-ρ)$, $ρ$ and $β$ obtained when we ignore $X_{t-1}$ are exactly the same as those from the Marquardt algorithm, by the Frisch-Waugh-Lovell theorem, as long as the estimate of $ρ$ is not precisely zero (which is, in any case, unlikely). We can then obtain the estimate of $α$ by dividing the OLS estimate of the constant by one minus the OLS estimate of $ρ$.
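The appeal to Frisch-Waugh-Lovell in point 3 can at least be verified mechanically: in any sample, the OLS coefficients on a constant, $Y_{t-1}$ and $X_t$ from the full unrestricted regression equal those obtained after partialling $X_{t-1}$ out of everything. A sketch in Python/numpy with simulated data (the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, alpha, beta, rho = 300, 1.0, 2.0, 0.6   # illustrative values

# Simulate Y_t = alpha + beta*X_t + u_t with AR(1) errors
x = rng.normal(size=n)
eps = rng.normal(scale=0.5, size=n)
u = np.zeros(n)
for t in range(1, n):
    u[t] = rho * u[t - 1] + eps[t]
y = alpha + beta * x + u

Y = y[1:]
W = np.column_stack([np.ones(n - 1), y[:-1], x[1:]])  # constant, Y_{t-1}, X_t
d = x[:-1].reshape(-1, 1)                             # X_{t-1}

# Full unrestricted OLS of Y on [W, X_{t-1}]
full = np.linalg.lstsq(np.column_stack([W, d]), Y, rcond=None)[0]

# FWL: project X_{t-1} out of Y and W, then regress the residuals
def resid_on(A, D):
    return A - D @ np.linalg.lstsq(D, A, rcond=None)[0]

fwl = np.linalg.lstsq(resid_on(W, d), resid_on(Y, d), rcond=None)[0]
print(np.allclose(full[:3], fwl))  # True: the first three coefficients coincide
```

This only checks the partitioned-regression identity itself; it does not check the sample-orthogonality condition under which dropping $X_{t-1}$ would leave the estimates unchanged, which is the unlikely part.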

This is not so much a question. I would like your expertise regarding whether you think my assertions are correct. It is very difficult to find answers to this online. It is also quite an interesting problem.

Your help is greatly appreciated. Thank you.

Christian


#### Best Answer

I assume that when you talk about "doing some algebra", you mean that you will simply run the regression Y = a + bY(-1) + cX + dX(-1) and then compute alpha = a / (1 - b).

This cannot be done. You're ignoring the restrictions on rho. Specifically, you're ignoring the fact that rho is also part of the coefficient on X(-1).

There is no way, once you have Xs involved, to switch from the OLS to the ARMA model. If there were, software packages would use OLS!
