When we do the standard difference in difference to find the causal impact of a policy, do we assume that the assignment to treatment and control groups was random?

In other words, if my program was specifically targeted at a particular group (say we want to know the effect of building schools on education, but more schools were placed in regions with lower school enrollment rates), would the standard DID account for this endogeneity?

In particular, in an OLS regression of the outcome variable on time, group, and time × group dummies, does the group fixed effect take care of this systematic difference between treatment and control?


#### Best Answer

The central assumption in DID estimation is that the trends in the outcome variable would have been parallel in the treated and control groups if there had been no treatment. A common way of checking if this assumption seems plausible is to see if the trends were parallel before the intervention. It isn't necessary to have random assignment for this assumption to hold, but it is much more likely to fail if assignment was based on some characteristics of the groups.
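The pre-intervention check described above can be sketched with simulated data (the numbers here are made up for illustration): regress the outcome on time, a group dummy, and their interaction using only pre-intervention periods. An interaction coefficient near zero is consistent with parallel pre-trends.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-intervention panel: 6 periods, two groups generated
# with the SAME slope (parallel trends) but different levels.
t = np.tile(np.arange(6), 2)              # time index for each group
g = np.repeat([0, 1], 6)                  # group dummy
y = 2.0 + 1.5 * t + 3.0 * g + rng.normal(0, 0.1, 12)

# OLS of y on [const, t, g, t*g]; the t*g coefficient measures the
# difference in pre-period slopes between the groups.
X = np.column_stack([np.ones(12), t, g, t * g])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[3])  # close to 0, consistent with parallel pre-trends
```

A large or statistically significant interaction term here would be a warning sign that the parallel-trends assumption is implausible.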

To be clear, the group fixed effects absorb any time-invariant differences between the groups – this is why we only need parallel, not identical, trends. The time fixed effects absorb any trend that is common to both groups; but if the trends differ between groups, the estimate is biased.

In your setting, I would be a bit worried about systematic differences between treated and control groups. If you have many groups, you can try adding group-specific linear time trends, which allow trends to differ across groups as long as the treatment itself does not change the slope. But an even better approach would be the synthetic control method of Abadie et al. The basic idea is to create a synthetic control group as a weighted average of a number of potential control groups, where the weights are chosen so that the synthetic control group mimics the treatment group in terms of pre-intervention trend and pre-determined covariates as closely as possible. See these references for details:

Abadie, Alberto, Alexis Diamond, and Jens Hainmueller. “Comparative Politics and the Synthetic Control Method.” American Journal of Political Science, no. Forthcoming (2014). doi:10.1111/ajps.12116.

———. “Synthetic Control Methods for Comparative Case Studies: Estimating the Effect of California’s Tobacco Control Program.” Journal of the American Statistical Association 105, no. 490 (June 2010): 493–505. doi:10.1198/jasa.2009.ap08746.

Abadie, Alberto, and Javier Gardeazabal. “The Economic Costs of Conflict: A Case Study of the Basque Country.” The American Economic Review 93, no. 1 (March 1, 2003): 113–32.
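The weighted-average idea behind synthetic control can be sketched as a constrained least-squares problem: choose nonnegative donor weights summing to one that make the weighted donor trajectory track the treated unit's pre-intervention outcomes. This is a simplified illustration with simulated data, solved here by Frank-Wolfe on the simplex rather than the full Abadie et al. procedure (which also matches covariates and optimises predictor weights).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pre-period outcomes: 10 periods, 5 donor units whose
# trajectories are random walks, plus a treated unit built (with a
# little noise) from a weighted average of the donors.
T0, J = 10, 5
Y0 = rng.normal(0, 1, (T0, J)).cumsum(axis=0)   # donor pool trajectories
w_true = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
y1 = Y0 @ w_true + rng.normal(0, 0.01, T0)      # treated unit, pre-period

# Frank-Wolfe: minimise ||y1 - Y0 w||^2 subject to w >= 0, sum(w) = 1.
# Each step moves toward the simplex vertex with the steepest descent
# direction, so the iterate stays a valid weight vector throughout.
w = np.full(J, 1.0 / J)
for k in range(20000):
    grad = 2 * Y0.T @ (Y0 @ w - y1)
    s = np.zeros(J)
    s[np.argmin(grad)] = 1.0                    # best vertex of the simplex
    w += (2.0 / (k + 2)) * (s - w)              # standard FW step size

print(np.round(w, 2))  # nonnegative weights summing to one
```

The post-intervention gap between the treated unit and `Y0 @ w` (extended to post-treatment periods) is then the estimated effect; placebo runs on the donor units are the usual way to gauge its significance.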