Solved – Calculate the autocorrelation function of an ARMA process

I'm new to time series. I would like to calculate this, but I really don't know how to begin.

$X_{t}=\underbrace{\phi X_{t-1}}_{\text{AR}(1)}+\underbrace{\epsilon_{t}-\theta_{1}\epsilon_{t-1}-\theta_{2}\epsilon_{t-2}}_{\text{MA}(2)}$

$\gamma_{t, t+h}=\operatorname{Cov}\left(X_{t}, X_{t+h}\right)=E\left[\left(X_{t}-\mu_{t}\right)\left(X_{t+h}-\mu_{t+h}\right)\right]$

$\rho_{t, t+h}=\frac{\gamma_{t, t+h}}{\sqrt{\gamma_{t, t} \gamma_{t+h, t+h}}}=\frac{\gamma_{t, t+h}}{\sigma_{t} \sigma_{t+h}}$

All useful comments will be rewarded.

In general the autocovariance function satisfies
\begin{align}
\gamma_k &= E(X_{t-k} X_t) \\
&= E(X_{t-k}(\phi X_{t-1}+\epsilon_t-\theta_1\epsilon_{t-1}-\theta_2\epsilon_{t-2})) \\
&= \phi\gamma_{k-1} + E(X_{t-k}\epsilon_t)-\theta_1 E(X_{t-k}\epsilon_{t-1})-\theta_2 E(X_{t-k}\epsilon_{t-2}). \tag{1}
\end{align}
Setting $k=0$ (and using the symmetry $\gamma_{-1}=\gamma_1$), (1) simplifies to
$$
\gamma_0 = \phi\gamma_1 + \sigma_\epsilon^2\bigl(1-\theta_1(\phi-\theta_1)-\theta_2(\phi(\phi-\theta_1)-\theta_2)\bigr), \tag{2}
$$
since
\begin{align}
E(X_{t}\epsilon_t) &= E((\phi X_{t-1}+\epsilon_t-\theta_1\epsilon_{t-1}-\theta_2\epsilon_{t-2})\epsilon_{t}) \\
&= E\epsilon_t^2=\sigma_\epsilon^2, \tag{2a} \\
E(X_{t}\epsilon_{t-1}) &= E((\phi X_{t-1}+\epsilon_t-\theta_1\epsilon_{t-1}-\theta_2\epsilon_{t-2})\epsilon_{t-1}) \\
&= \phi E(X_{t-1}\epsilon_{t-1})-\theta_1 E\epsilon_{t-1}^2 \\
&= \phi\sigma_\epsilon^2-\theta_1\sigma_\epsilon^2, \tag{2b} \\
E(X_{t}\epsilon_{t-2}) &= E((\phi X_{t-1}+\epsilon_t-\theta_1\epsilon_{t-1}-\theta_2\epsilon_{t-2})\epsilon_{t-2}) \\
&= \phi E(X_{t-1}\epsilon_{t-2})-\theta_2 E\epsilon_{t-2}^2 \\
&= \phi(\phi\sigma_\epsilon^2-\theta_1\sigma_\epsilon^2)-\theta_2\sigma_\epsilon^2. \tag{2c}
\end{align}
Note how (2c) follows from (2b), which in turn follows from (2a).
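As a quick numerical sanity check of (2a)–(2c) (not part of the derivation), you can simulate the process and compare the sample cross-moments $E(X_t\epsilon_{t-j})$ with the formulas. The parameter values $\phi=0.6$, $\theta_1=-0.4$, $\theta_2=-0.3$ and $\sigma_\epsilon=1$ below are arbitrary example choices, not anything from the problem:

```python
import numpy as np

# Simulate X_t = phi*X_{t-1} + e_t - th1*e_{t-1} - th2*e_{t-2}
# with arbitrary example parameters (assumed for illustration only).
rng = np.random.default_rng(42)
phi, th1, th2 = 0.6, -0.4, -0.3   # example values, sigma_eps = 1
n = 200_000
e = rng.standard_normal(n + 2)    # e[t+2] plays the role of eps_t

x = np.zeros(n)
for t in range(n):
    x_prev = x[t - 1] if t > 0 else 0.0
    x[t] = phi * x_prev + e[t + 2] - th1 * e[t + 1] - th2 * e[t]

# Sample estimates of E(X_t eps_{t-j}) for j = 0, 1, 2:
m0 = np.mean(x * e[2:])    # (2a): should be close to sigma_eps^2 = 1
m1 = np.mean(x * e[1:-1])  # (2b): should be close to phi - th1
m2 = np.mean(x * e[:-2])   # (2c): should be close to phi*(phi - th1) - th2
print(m0, m1, m2)
```

With 200,000 observations the three sample moments land within a few hundredths of the theoretical values, which is a useful way to catch sign errors in hand derivations like this.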

Similarly, by setting $k=1$ and $k=2$, you obtain two more equations in addition to (2) (not included since this is self-study) that you can solve for the three unknowns $gamma_0$, $gamma_1$ and $gamma_2$.

For $k>2$ (the order of the MA part), since $X_{t-k}$ is then uncorrelated with $\epsilon_t$, $\epsilon_{t-1}$, and $\epsilon_{t-2}$, (1) simplifies to $$ \gamma_k=\phi \gamma_{k-1}. $$ More generally, for an ARMA$(p,q)$ model, one can show that $\gamma_k$ for lags $k>q$ satisfies the linear difference equation $\phi(B)\gamma_k=0$, where $\phi(B)$ is the AR operator polynomial of the model; see e.g. Wei (2007), ch. 3.
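The recursion $\gamma_k=\phi\gamma_{k-1}$ for $k>2$ is also easy to see in simulation: the ratio of consecutive sample autocorrelations beyond lag 2 should be close to $\phi$. A minimal sketch, again with assumed example parameters ($\phi=0.6$, $\theta_1=-0.4$, $\theta_2=-0.3$):

```python
import numpy as np

# Simulate the ARMA(1,2) process with example parameters (assumed),
# then check that rho_k / rho_{k-1} is close to phi for k > 2.
rng = np.random.default_rng(0)
phi, th1, th2 = 0.6, -0.4, -0.3   # example values, sigma_eps = 1
n = 200_000
e = rng.standard_normal(n + 2)    # e[t+2] plays the role of eps_t

x = np.zeros(n)
for t in range(n):
    x_prev = x[t - 1] if t > 0 else 0.0
    x[t] = phi * x_prev + e[t + 2] - th1 * e[t + 1] - th2 * e[t]

def sample_acf(x, k):
    """Sample autocorrelation of x at lag k."""
    xc = x - x.mean()
    return np.dot(xc[:-k], xc[k:]) / np.dot(xc, xc)

# For k > 2 the ratio should be close to phi = 0.6.
print(sample_acf(x, 4) / sample_acf(x, 3))
```

The same check at lags 2 and 3 would fail, since $\gamma_2$ still contains MA cross-terms; the pure AR decay only kicks in past the MA order, which is exactly what the difference equation $\phi(B)\gamma_k=0$ for $k>q$ says.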
