# Solved – Adding a white noise process to an AR(p) process

How can one show that $y_t = x_t + \nu_t$, where $x_t$ is an AR($p$) process and $\nu_t$ is white noise, follows an ARMA($p$,$p$) process?

Say $x_t = \phi x_{t-1} + \epsilon_t$. Then, replacing $x_t = y_t - \nu_t$, we get $y_t = \phi y_{t-1} + \nu_t - \phi\nu_{t-1} + \epsilon_t$. Doing the same with an AR($p$) process always yields $\epsilon_t$ plus an MA($p$) term in $\nu_t$.


You have
$$y_t = x_t + v_t \tag{1}$$
and
$$\phi(B)x_t = e_t.$$
Applying $\phi(B)$ to both sides of (1) yields
\begin{align}
\phi(B)y_t &= \phi(B)x_t + \phi(B)v_t \\
&= e_t + \phi(B)v_t. \tag{2}
\end{align}
Consider the right hand side of (2). This is clearly a covariance stationary process. By the Wold decomposition theorem it must have a moving average representation. Since its autocovariance function cuts off for lags $k>p$, it must be an MA($p$) process, say $(1-\theta_1 B-\dots-\theta_p B^p)u_t$. Hence, $y_t$ must be an ARMA($p$,$p$) process.
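One step worth making explicit is why the autocovariance function of the right hand side of (2) cuts off at lag $p$. Writing $w_t = e_t + \phi(B)v_t$ and using the convention $\phi_0 := -1$ (so that $\phi(B)v_t = -\sum_{j=0}^{p}\phi_j v_{t-j}$), independence of the two white noise processes gives
$$\gamma_w(k) = \sigma_e^2\,\mathbf{1}\{k=0\} + \sigma_v^2\sum_{j=0}^{p-k}\phi_j\phi_{j+k},$$
which is zero for every $k>p$, exactly the cutoff pattern of an MA($p$) process. These expressions are also the right hand sides of the system (3) below.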

From the left hand side of (2), it is clear that the autoregressive parameters of $y_t$ are equal to those of $x_t$. The moving average parameters $\theta_1,\theta_2,\dots,\theta_p$ and the white noise variance $\sigma_u^2$ of this ARMA($p$,$p$) process can be found by equating the autocovariance function of the right hand side of (2) with that of $\theta(B)u_t$ for lags $k=0,1,\dots,p$ and solving the $p+1$ resulting non-linear equations
\begin{align}
(1+\theta_1^2+\dots+\theta_p^2)\sigma_u^2 &= \sigma_e^2 + (1+\phi_1^2+\dots+\phi_p^2)\sigma_v^2 \\
(-\theta_1+\theta_1\theta_2+\dots+\theta_{p-1}\theta_p)\sigma_u^2 &= (-\phi_1+\phi_1\phi_2+\dots+\phi_{p-1}\phi_p)\sigma_v^2 \\
&\vdots \tag{3} \\
(-\theta_{p-1}+\theta_1\theta_p)\sigma_u^2 &= (-\phi_{p-1}+\phi_1\phi_p)\sigma_v^2 \\
\theta_p\sigma_u^2 &= \phi_p\sigma_v^2.
\end{align}
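As a concrete illustration (the $p=1$ case, which is not worked out above), the system (3) reduces to
$$(1+\theta_1^2)\sigma_u^2 = \sigma_e^2 + (1+\phi_1^2)\sigma_v^2, \qquad \theta_1\sigma_u^2 = \phi_1\sigma_v^2.$$
Eliminating $\sigma_u^2$ leaves the quadratic $\phi_1\sigma_v^2\,\theta_1^2 - \bigl(\sigma_e^2+(1+\phi_1^2)\sigma_v^2\bigr)\theta_1 + \phi_1\sigma_v^2 = 0$, whose two roots are reciprocals of each other; taking the root with $|\theta_1|<1$ gives the invertible ARMA(1,1) representation.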

Here is an R function that solves these equations and returns the parameters of the ARMA($p$,$p$) model.

```r
arplusnoise2arma <- function(phi, se = 1, sv) {
  p <- length(phi)  # order of the AR process
  # autocovariance (lags 0..p) of the right hand side of (2): e_t + phi(B)v_t;
  # in ltsa's convention, theta = phi gives the MA(p) process (1 - phi_1 B - ... - phi_p B^p)v_t
  gamma0 <- ltsa::tacvfARMA(theta = phi, maxLag = p, sigma2 = sv)
  # the white noise e_t contributes sigma_e^2 at lag 0 only
  gamma0[1] <- gamma0[1] + se
  # non-linear equations to solve, resulting from equating autocovariance functions
  f <- function(par) {
    gamma1 <- ltsa::tacvfARMA(theta = par[1:p], maxLag = p, sigma2 = exp(par[p + 1]))
    gamma0 - gamma1
  }
  # solve the non-linear system
  fit <- rootSolve::multiroot(f, c(phi, 1), maxiter = 1000, rtol = 1e-12)
  # parameters of the new ARMA, possibly non-invertible
  theta <- fit$root[1:p]
  sigma2 <- exp(fit$root[p + 1])
  # reparameterize the MA part to make it invertible by moving roots outside the unit circle
  r <- 1/polyroot(c(1, -theta))  # inverse roots of the MA polynomial
  for (i in 1:p) {
    if (Mod(r[i]) > 1) {
      sigma2 <- sigma2*r[i]^2
      r[i] <- 1/r[i]
    }
  }
  sigma2 <- Re(sigma2)
  # rebuild the MA polynomial as the product of the factors (1 - r_i B)
  polycoef <- 1
  for (i in 1:p)
    polycoef <- c(polycoef, 0) - r[i]*c(0, polycoef)
  theta <- Re(-polycoef[-1])
  # return the invertible ARMA(p,p) model
  list(model = list(phi = phi, theta = theta, sigma2 = sigma2),
       estim.precis = fit$estim.precis)
}
```

The following example checks, for a simple stationary AR(3) model, that the autocovariance function of the computed ARMA(3,3) model matches that of the AR(3)-plus-noise process: the two autocovariance functions printed below agree at every lag except lag 0, where they differ by exactly $\sigma_v^2 = 0.5$, the contribution of the added white noise.

```r
> phi <- c(.2, -.1, .2)
> Mod(polyroot(c(1, -phi)))
[1] 1.678659 1.725853 1.725853
> result <- arplusnoise2arma(phi, 1, .5)
> result
$model
$model$phi
[1]  0.2 -0.1  0.2

$model$theta
[1]  0.07286795 -0.04104890  0.06545496

$model$sigma2
[1] 1.527768


$estim.precis
[1] 4.176867e-14

> do.call(ltsa::tacvfARMA, c(result$model, maxLag = 10))
 [1]  1.5793650794  0.1904761905 -0.0317460317  0.1904761905  0.0793650794 -0.0095238095
 [7]  0.0282539683  0.0224761905 -0.0002349206  0.0033561905  0.0051899683
> ltsa::tacvfARMA(phi = phi, theta = NULL, maxLag = 10)
 [1]  1.0793650794  0.1904761905 -0.0317460317  0.1904761905  0.0793650794 -0.0095238095
 [7]  0.0282539683  0.0224761905 -0.0002349206  0.0033561905  0.0051899683
```
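As a further sanity check, not part of the answer above, one can simulate the AR(3)-plus-noise process and fit an ARMA(3,3) by maximum likelihood; the estimates should land reasonably close to the computed parameters. A minimal sketch follows. Note that `stats::arima()` writes the MA polynomial with plus signs, so its `ma` coefficients estimate $-\theta$ in the convention used here.

```r
set.seed(1)
n <- 2e5
x <- arima.sim(model = list(ar = c(.2, -.1, .2)), n = n)  # AR(3) with sigma_e^2 = 1
v <- rnorm(n, sd = sqrt(.5))                              # white noise with sigma_v^2 = 0.5
fit <- arima(x + v, order = c(3, 0, 3), include.mean = FALSE)
fit$coef    # ar terms should be near (0.2, -0.1, 0.2); ma terms near -(0.073, -0.041, 0.065)
fit$sigma2  # should be near 1.53
```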
