I have a question on estimating a difference-in-differences model in Stata. As I understand it, also from other questions, when there are no covariates, estimating the diff-in-diff with a plain regression (including a dummy for the treatment year, a dummy for treatment, and their interaction) gives the same results as estimating it with a fixed-effects command such as Stata's xtreg. That is indeed what happens with my data, but the standard errors are completely different: when I use Stata's "reg" command I get no significance at all, whereas with xtreg I get a t-statistic above 2, with standard errors almost 4 times smaller. Why is that? What does it suggest about the validity of the model and which command to use? And what would be best to do when I later add covariates?
Edit: here is an example from my code:
```
gen y07=1 if year==2017
replace y07=0 if y07!=1
gen did=y07*treat
xtset id year

xtreg y y07 did, fe r

Fixed-effects (within) regression               Number of obs     =      4,568
Group variable: id                              Number of groups  =      2,284

R-sq:                                           Obs per group:
     within  = 0.0131                                         min =          2
     between = 0.0008                                         avg =        2.0
     overall = 0.0011                                         max =          2

                                                F(2,2283)         =      12.73
corr(u_i, Xb)  = 0.0069                         Prob > F          =     0.0000

                              (Std. Err. adjusted for 2,284 clusters in id)
------------------------------------------------------------------------------
             |               Robust
           y |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
         y07 |   .5117687   .1409194     3.63   0.000     .2354253    .7881121
         did |   .8282564   .4076776     2.03   0.042     .0287991    1.627714
       _cons |   8.272329   .0809889   102.14   0.000      8.11351    8.431149
-------------+----------------------------------------------------------------
     sigma_u |  18.188562
     sigma_e |  5.4737922
         rho |  .91695247   (fraction of variance due to u_i)
------------------------------------------------------------------------------

reg y treat y07 did, r

Linear regression                               Number of obs     =      4,568
                                                F(3, 4564)        =       1.80
                                                Prob > F          =     0.1441
                                                R-squared         =     0.0013
                                                Root MSE          =     18.597

------------------------------------------------------------------------------
             |               Robust
           y |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       treat |   .6513042   .7340444     0.89   0.375    -.7877781    2.090386
         y07 |   .5117687   .6775766     0.76   0.450    -.8166093    1.840147
         did |   .8282564   1.161073     0.71   0.476    -1.448009    3.104522
       _cons |   8.045057   .4404064    18.27   0.000     7.181647    8.908467
------------------------------------------------------------------------------
```
Of course I was imprecise in saying the standard error was four times smaller; it is slightly less than three times, but the point stands. The variable "treat" denotes being assigned to the treatment group.
Best Answer
You need to compare apples to apples, so use clustering with OLS and clustering with `xtreg, fe` (or robust with `xtreg, fe`, which defaults to clustering, as Thomas pointed out). These coefficient equivalences are limited to two-period (one pre and one post) datasets with treatment at the same time for all treated units.
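Applied to your regressions, a minimal sketch (assuming the variable names from your edit: `y`, `treat`, `y07`, `did`, with panel id `id` and time `year`):

```
* sketch using the variable names from the question's edit (assumed, not verified)
xtset id year
xtreg y y07 did, fe vce(cluster id)      // with xtreg, fe the robust option already clusters on id
reg   y treat y07 did, vce(cluster id)   // cluster the OLS version on id as well
```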
Here's an example of a 2×2 DID on a public dataset demonstrating this. NJ restaurants are treated (they become subject to the minimum wage increase) and PA restaurants are not. February '92 (t = 0) is pre and November '92 (t = 1) is post. The DID parameter is the interaction of t = 1 and NJ = 1. The outcome fte is full-time-equivalent employees. Here I balance the panel in order to get `xtreg, fe` and OLS to give the same coefficient estimates. If the panel is unbalanced (consists of repeated cross-sections), `xtreg, fe` will drop observations that appear in only one year and the estimates will no longer match OLS or manual calculations (a quick way to check for such singletons is sketched below). You may want to stick with clustered OLS if you have a repeated cross-section.
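As a quick check for units observed in only one year (and to balance the panel by dropping them), a sketch assuming a unit identifier `id`:

```
* count observations per unit; keep only units observed in both periods
bysort id: gen nobs = _N
tab nobs
keep if nobs == 2
drop nobs
```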
Here is the result. Note that you can use factor variable notation to create the interactions rather than hard coding them.
```
. use http://fmwww.bc.edu/repec/bocode/c/CardKrueger1994.dta, clear
(Dataset from Card&Krueger (1994))

. drop if id == 407 // duplicate restaurant
(4 observations deleted)

. drop if missing(fte, treated, t, id)
(19 observations deleted)

. bysort id: keep if _N==2 // balance the panel
(19 observations deleted)

. xtset id t
       panel variable:  id (strongly balanced)
        time variable:  t, 0 to 1
                delta:  1 unit

. /* calculate DID by hand */
. table treated t, c(mean fte N fte) row col

------------------------------------------------
New Jersey = 1;  |  Feb. 1992 = 0; Nov. 1992 = 1
Pennsylvania = 0 |        0          1      Total
-----------------+------------------------------
              PA | 20.17333      17.65   18.91167
                 |       75         75        150
                 |
              NJ | 17.06927   17.51831   17.29379
                 |      314        314        628
                 |
           Total | 17.66774    17.5437   17.60572
                 |      389        389        778
------------------------------------------------

. di %9.3f (17.51831 - 17.06927) - (17.65 - 20.17333)
    2.972

. /* fit models */
. eststo ols_robust:   qui reg fte i.treated##i.t, robust
. eststo xtreg_robust: qui xtreg fte i.treated##i.t, fe robust
. eststo xtreg_clust:  qui xtreg fte i.treated##i.t, fe cluster(id)
. eststo ols_clust:    qui reg fte i.treated##i.t, cluster(id)

. capture ssc install estout
. esttab *, se(%9.7f) noomitted drop(0.treated 0.t 0.treated#0.t) modelwidt(15) mtitles label varwidth(35)

---------------------------------------------------------------------------------------------------------------
                                                (1)               (2)               (3)               (4)
                                         ols_robust      xtreg_robust       xtreg_clust         ols_clust
---------------------------------------------------------------------------------------------------------------
NJ                                           -3.104*                                               -3.104*
                                        (1.4475664)                                           (1.4484988)

Feb. 1992 = 0; Nov. 1992 = 1=1               -2.523            -2.523*           -2.523*           -2.523*
                                        (1.6371048)       (1.2498119)       (1.2498119)       (1.2506190)

NJ # Feb. 1992 = 0; Nov. 1992 = 1=1           2.972             2.972*            2.972*            2.972*
                                        (1.7822146)       (1.3337493)       (1.3337493)       (1.3346107)

Constant                                      20.17***          17.67***          17.67***          20.17***
                                        (1.3591695)       (0.2232501)       (0.2232501)       (1.3600450)
---------------------------------------------------------------------------------------------------------------
Observations                                    778               778               778               778
---------------------------------------------------------------------------------------------------------------
Standard errors in parentheses
* p<0.05, ** p<0.01, *** p<0.001
```
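For reference, the factor-variable interaction `i.treated##i.t` above is equivalent to hard-coding the dummies yourself; a sketch (the variable name `nj_post` is made up for illustration):

```
* hand-coded equivalent of reg fte i.treated##i.t, cluster(id)
gen nj_post = treated * t                    // nj_post is a made-up name
reg fte treated t nj_post, vce(cluster id)   // same coefficients as the factor-variable version
```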
Clustering in DID settings is a good idea for the reasons outlined in Bertrand, Duflo, and Mullainathan's 2004 QJE paper. Clustering at the level of treatment is also a good idea, but it is not feasible here: treatment is a state law and we only have data from two states, so there are not enough clusters for it to work well. Generally your SEs will go up when you cluster in DID, but if the errors are negatively correlated within cluster, they might shrink. See this post for the reasons why.
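For completeness, clustering at the treatment level would look like the line below; it is shown only for illustration, since with data from just two states the cluster-robust asymptotics are unreliable (here the `treated` dummy coincides with the state):

```
* illustration only: clustering at the treatment (state) level -- far too few clusters here
reg fte i.treated##i.t, vce(cluster treated)
```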
Code:
```
estimates clear
cls
use http://fmwww.bc.edu/repec/bocode/c/CardKrueger1994.dta, clear
drop if id == 407 // duplicate restaurant
drop if missing(fte, treated, t, id)
bysort id: keep if _N==2 // balance the panel
xtset id t

/* calculate DID by hand */
table treated t, c(mean fte N fte) row col
di %9.3f (17.51831 - 17.06927) - (17.65 - 20.17333)

/* fit models */
eststo ols_robust:   qui reg fte i.treated##i.t, robust
eststo xtreg_robust: qui xtreg fte i.treated##i.t, fe robust
eststo xtreg_clust:  qui xtreg fte i.treated##i.t, fe cluster(id)
eststo ols_clust:    qui reg fte i.treated##i.t, cluster(id)

capture ssc install estout
esttab *, se(%9.7f) noomitted drop(0.treated 0.t 0.treated#0.t) modelwidt(15) mtitles label varwidth(35)
```