The parallel trends assumption for a difference-in-differences analysis does not require the levels of the outcomes to be similar, only that their trends are parallel. However, in a case where we want the levels to be similar as well, can we match on the pre-treatment outcomes? Is there a reason not to do this? Or should we instead match on pre-treatment covariates to try to reduce the gap between the levels of the treated and control groups before the treatment?
Combining difference-in-differences (DiD) and conditioning on pre-treatment outcomes is used in the applied literature (at least in economics). My understanding of the literature is that the conditions under which this is good practice are not completely clear. This paper, which I find relevant for this question, argues that one should be careful about this combination. Conditioning on pre-treatment outcomes does not necessarily (in theory) reduce the bias of the DiD estimator. In simulations, the author actually finds that conditioning on pre-treatment outcomes can increase the bias. A safer practice, according to the paper, is to expand the set of pre-treatment covariates (rather than outcomes) one controls for, to improve the plausibility of the parallel trends assumption.
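To see why parallel trends with different levels is enough, here is a minimal simulation sketch (all numbers are made up for illustration): the treated group starts at a much higher level than the controls, but both share the same time trend, so the simple DiD estimate still recovers the treatment effect without any matching on pre-treatment outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
treated = rng.random(n) < 0.5
true_effect = 2.0  # assumed treatment effect for this toy example

# Group levels differ (treated start much higher), but the
# time trend is common to both groups: parallel trends holds.
baseline = np.where(treated, 5.0, 1.0) + rng.normal(0, 1, n)
trend = 0.5
y_pre = baseline + rng.normal(0, 1, n)
y_post = baseline + trend + true_effect * treated + rng.normal(0, 1, n)

# DiD: change over time for treated, minus change for controls.
# The level difference and the common trend both difference out.
did = (y_post[treated].mean() - y_pre[treated].mean()) \
    - (y_post[~treated].mean() - y_pre[~treated].mean())
print(f"DiD estimate: {did:.2f} (true effect {true_effect})")
```

The estimate lands near the true effect despite the large gap in levels, which is exactly the sense in which DiD does not need the groups to look alike before treatment, only to move in parallel.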