If you choose to analyse a pre-post treatment-control design with a continuous dependent variable using a mixed ANOVA, there are various ways of quantifying the effect of being in the treatment group.
The interaction effect is one main option.
In general, I particularly like Cohen's d type measures (i.e., $\frac{\mu_1 - \mu_2}{\sigma}$). I don't like variance-explained measures because their results vary based on irrelevant factors such as the relative sample sizes of the groups.
Thus, I was thinking I could quantify the effect as follows:
- $\Delta\mu_c = \mu_{c2} - \mu_{c1}$
- $\Delta\mu_t = \mu_{t2} - \mu_{t1}$
- Thus, the effect size could be defined as $\frac{\Delta\mu_t - \Delta\mu_c}{\sigma}$,

where $c$ refers to control, $t$ to treatment, and 1 and 2 to pre and post, respectively. $\sigma$ could be the pooled standard deviation at time 1.
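For concreteness, here is a minimal Python sketch of that estimator (the function and variable names are mine, purely illustrative): it plugs the sample means and the pooled pre-test standard deviation into the definition above.

```python
import numpy as np

def pre_post_control_d(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences effect size, standardized by the
    pooled pre-test (time 1) standard deviation."""
    treat_pre, treat_post = np.asarray(treat_pre), np.asarray(treat_post)
    ctrl_pre, ctrl_post = np.asarray(ctrl_pre), np.asarray(ctrl_post)

    # Mean pre-to-post change within each group
    delta_t = treat_post.mean() - treat_pre.mean()
    delta_c = ctrl_post.mean() - ctrl_pre.mean()

    # Pooled standard deviation at time 1
    n_t, n_c = len(treat_pre), len(ctrl_pre)
    sd_pre = np.sqrt(((n_t - 1) * treat_pre.var(ddof=1)
                      + (n_c - 1) * ctrl_pre.var(ddof=1)) / (n_t + n_c - 2))

    return (delta_t - delta_c) / sd_pre

# Toy data: treatment gains ~5 points, control ~1, pre-test SD ~10
rng = np.random.default_rng(1)
t_pre = rng.normal(50, 10, 40); t_post = t_pre + rng.normal(5, 5, 40)
c_pre = rng.normal(50, 10, 40); c_post = c_pre + rng.normal(1, 5, 40)
print(pre_post_control_d(t_pre, t_post, c_pre, c_post))  # roughly (5 - 1)/10 = 0.4
```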
Questions:
- Is it appropriate to label this effect size measure $d$?
- Does this approach seem reasonable?
- What is standard practice for effect size measures for such designs?
Best Answer
Yes, what you are suggesting is exactly what has been proposed in the literature. See, for example: Morris, S. B. (2008). Estimating effect sizes from pretest-posttest-control group designs. Organizational Research Methods, 11(2), 364-386 (unfortunately, not freely accessible). The article also describes several methods for estimating this effect size measure. You can use the letter "d" to denote the effect size, but you should definitely explain what you calculated; otherwise, readers will probably assume that you computed the standardized mean difference for the post-test scores only.
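One variant discussed in that literature adds a small-sample bias correction in the style of Hedges' g, multiplying the point estimate by $c_P = 1 - \frac{3}{4(n_t + n_c - 2) - 1}$. A hedged sketch, reusing the pre_post_control_d function from the question above (the correction factor below is the standard Hedges formula; consult Morris, 2008, for the exact estimators and their sampling variances before relying on this):

```python
def pre_post_control_d_corrected(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Bias-corrected version of the estimator sketched in the question.
    Applies the Hedges-style factor c_P = 1 - 3 / (4 * df - 1) with
    df = n_t + n_c - 2; one of several estimators in this literature."""
    n_t, n_c = len(treat_pre), len(ctrl_pre)
    c_p = 1 - 3 / (4 * (n_t + n_c - 2) - 1)
    return c_p * pre_post_control_d(treat_pre, treat_post, ctrl_pre, ctrl_post)
```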