I am wondering how impulse response captures information differently than other statistical techniques such as cross-correlation. To elaborate on what I mean, I will describe an example I encountered. Imagine I have one time series (let's call this time series A), and I have several time series that could have impacted A. These several time series may even depend on one another in a specific way, and I don't have prior knowledge about their relationships. Therefore, I won't necessarily know the ordering of what impacted what, but think of it as some sort of chain: time series B might impact C, which impacts F, which impacts A (B->C->F->A). If something goes wrong in A (e.g. there is abnormal spiking), I want to know what is responsible (which should be B in this case, because I know the structure of how things are related). If I simply look at correlation, F will have the highest correlation with A, because they are the closest to each other and most similar (and this was indeed the case when I ran cross-correlation). However, impulse response appears to be more nuanced: it was able to say that B has the highest coefficient. How is impulse response able to do this? If we add a shock, why does it intuitively make sense that B has the biggest impact on A instead of F (how does the algorithm understand this)? Why does the IRF not fall into the same trap/limitations that something like simple cross-correlation does?
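A minimal simulation of the kind of chain I have in mind (the chain structure, lags, and 0.8/0.3 coefficients are all made up purely for illustration, not from my actual data):

```python
import numpy as np

# Simulate the chain B -> C -> F -> A with a one-period lag at each
# step; coefficients (0.8 signal, 0.3 noise) are made up for illustration.
rng = np.random.default_rng(0)
n = 500
B = rng.normal(size=n)
C = np.zeros(n)
F = np.zeros(n)
A = np.zeros(n)
for t in range(1, n):
    C[t] = 0.8 * B[t - 1] + 0.3 * rng.normal()
    F[t] = 0.8 * C[t - 1] + 0.3 * rng.normal()
    A[t] = 0.8 * F[t - 1] + 0.3 * rng.normal()

def lagged_corr(x, y, lag):
    """Correlation between x_{t-lag} and y_t."""
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

# F feeds A directly (lag 1); B reaches A only through C and F (lag 3).
corr_F_A = lagged_corr(F, A, 1)
corr_B_A = lagged_corr(B, A, 3)
print(corr_F_A, corr_B_A)
```

On this simulated chain, F shows the stronger lagged correlation with A simply because less noise separates them, even though B is the root driver — exactly the ranking I want to avoid.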

Additionally, if anyone suggests any good videos, textbook readings, blogs/posts, etc. that can explain and unpack impulse response in a less technical way, that would be greatly appreciated.

Even if you cannot answer all aspects of my question, any information about impulse response used for time series and VAR would be of great use. Thank you so much 🙂


#### Best Answer

Impulse-response analysis is quite simple. Having estimated a vector autoregressive (VAR) model and expressed it in its vector moving-average (VMA) representation, you can see how a shock to variable B affects variable A in subsequent periods: you just plug the shock into the VMA representation. For example, if the VMA equation for variable A is $$ A_t = \mu_A + \theta_{1AA} \varepsilon_{A,t-1} + \theta_{1AB} \varepsilon_{B,t-1} + \dots + \theta_{2AA} \varepsilon_{A,t-2} + \theta_{2AB} \varepsilon_{B,t-2} + \dots + \varepsilon_{A,t}, $$ then the shock to B from one period before, $\varepsilon_{B,t-1}$, has an effect of $\theta_{1AB}$, the shock to B from two periods before, $\varepsilon_{B,t-2}$, has an effect of $\theta_{2AB}$, and so on.
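As a mechanical sketch of this (a hand-rolled bivariate VAR(1) in plain NumPy with made-up simulation coefficients; in the one-lag case the VMA coefficient matrices $\theta_h$ are simply powers of the estimated VAR coefficient matrix):

```python
import numpy as np

# Simulate a bivariate VAR(1), y_t = Phi y_{t-1} + eps_t, where
# variable A (index 0) loads on lagged B (index 1) with weight 0.4.
# All coefficients here are made up for illustration.
rng = np.random.default_rng(1)
n = 1000
Phi_true = np.array([[0.5, 0.4],   # A_t = 0.5 A_{t-1} + 0.4 B_{t-1} + eps_A
                     [0.0, 0.5]])  # B_t = 0.5 B_{t-1}              + eps_B
Y = np.zeros((n, 2))
for t in range(1, n):
    Y[t] = Phi_true @ Y[t - 1] + rng.normal(scale=0.1, size=2)

# OLS estimate of Phi: regress y_t on y_{t-1}.
Phi_hat = np.linalg.lstsq(Y[:-1], Y[1:], rcond=None)[0].T

# For a VAR(1) the VMA coefficient matrices are theta_h = Phi^h, so the
# response of A to a unit shock in B, h periods later, is theta_h[0, 1].
irf_A_to_B_shock = [np.linalg.matrix_power(Phi_hat, h)[0, 1]
                    for h in range(5)]
print(irf_A_to_B_shock)  # close to the true values 0, 0.4, 0.4, 0.3, ...
```

This is also where the difference from cross-correlation comes from: the VAR coefficients behind the IRF are partial regression coefficients, estimated while conditioning on all the other series jointly, whereas a cross-correlation is a marginal measure that soaks up everything transmitted through intermediate variables.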

How do you know that, in a particular time period, it was a shock to variable B that affected A? You have the estimated shocks $\hat{\varepsilon}$ from the VAR model, so you can calculate the effects you are interested in, at any time period, for any variable.
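A sketch of that calculation, in the same hand-rolled NumPy style (simulation coefficients again made up): for a VAR(1), back-substituting $y_t = \hat{\Phi} y_{t-1} + \hat{\varepsilon}_t$ decomposes A at any date into the accumulated contributions of each variable's estimated shocks.

```python
import numpy as np

# Same made-up bivariate VAR(1): A (index 0) loads on lagged B (index 1).
rng = np.random.default_rng(2)
n = 300
Phi_true = np.array([[0.5, 0.4],
                     [0.0, 0.5]])
Y = np.zeros((n, 2))
for t in range(1, n):
    Y[t] = Phi_true @ Y[t - 1] + rng.normal(scale=0.1, size=2)

# Fit by OLS and recover the estimated shocks (residuals).
Phi_hat = np.linalg.lstsq(Y[:-1], Y[1:], rcond=None)[0].T
eps_hat = Y[1:] - Y[:-1] @ Phi_hat.T   # eps_hat[t-1] is the shock at time t

# Back-substitution: y_t = sum_h Phi_hat^h eps_hat_{t-h} (Y[0] = 0 here),
# so A at time t splits into parts driven by A's and by B's past shocks.
t = 150
contrib_from_A_shocks = sum(
    np.linalg.matrix_power(Phi_hat, h)[0, 0] * eps_hat[t - 1 - h, 0]
    for h in range(t))
contrib_from_B_shocks = sum(
    np.linalg.matrix_power(Phi_hat, h)[0, 1] * eps_hat[t - 1 - h, 1]
    for h in range(t))

# The two contributions add back up to the observed A_t.
print(contrib_from_A_shocks + contrib_from_B_shocks, Y[t, 0])
```

Comparing `contrib_from_B_shocks` with `contrib_from_A_shocks` at the date of an abnormal spike is one way to see which variable's shocks were responsible for it.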
