Solved – Deriving effect sizes from pre-post treatment design in R

This question is related to my previous post.

For a meta-analysis of a pre-post treatment design, I have data from 6 studies that tested subjects pre- and post-treatment on a continuous scale (means and standard deviations are available pre- and post-treatment). Moreover, every study tested 4 subgroups.
Here is some dummy data:

set.seed(123)
a <- c(rep("study1", 4), rep("study2", 4), rep("study3", 4),
       rep("study4", 4), rep("study5", 4), rep("study6", 4))
b <- rep(c("group1", "group2", "group3", "group4"), 6)
my_data <- data.frame(a, b)
names(my_data) <- c("study", "group")
my_data$n <- round(rnorm(24, 100, 20), 0)
my_data$mean_pre <- rep(c(1, 2, 3, 4), 6) + rnorm(24, 0, 0.5)
my_data$mean_post <- rep(c(1, 2, 3, 4), 6) * 2 + rnorm(24, 0, 0.5)
my_data$var_pre <- rep(1, 24) + rnorm(24, 0, 0.25)
my_data$var_post <- rep(1, 24) + rnorm(24, 0, 0.25)

My 1st question is:

  • How can I calculate an effect size of the treatment (and its variance) for every study, so that I can then conduct a meta-analysis? Is the following approach valid?

Following Cooper's book "The Handbook of Research Synthesis and Meta-Analysis" (page 227), I use the formula

$$d = \frac{\bar{Y}_1 - \bar{Y}_2}{S_{within}}.$$

Here $\bar{Y}_1$ and $\bar{Y}_2$ are the means of the pre- and post-scores in each group of each study, and $S_{within}$ is defined as

$$S_{within} = \frac{S_{difference}}{\sqrt{2(1 - cor(Y_1, Y_2))}},$$

where $cor(Y_1, Y_2)$ is the correlation between pre- and post-scores. However, $S_{difference}$ is not available from the studies included in the meta-analysis, so I calculate it with the formula for the standard deviation of the difference of two correlated random variables:

$$S_{difference} = \sqrt{S_1^2 + S_2^2 - 2 \cdot cor(Y_1, Y_2) \cdot S_1 S_2}.$$

Putting the two formulas together, I get:

$$S_{within} = \sqrt{\frac{S_1^2 + S_2^2 - 2 \cdot cor(Y_1, Y_2) \cdot S_1 S_2}{2(1 - cor(Y_1, Y_2))}}.$$

My 2nd question is:

  • Pre- and post-scores in the individual studies have been measured on different scales. Would there be a way (e.g., a z-transform) to transform the data so that I could check separately for group differences pre-treatment and group differences post-treatment?

Regarding Question 1

You are currently doing a lot of extra work that is not needed. What you are trying to calculate is the standardized mean change, standardizing by the raw-score standard deviation. This is usually calculated as:

$d = (\bar{Y}_1 - \bar{Y}_2) / SD_1$,

where $SD_1$ is the standard deviation of the pre-treatment scores (one could instead use $SD_2$). Since you mentioned that you have the pre- and post-treatment means and SDs, you should be able to calculate this easily.
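With the dummy data from the question, this can be computed directly. A minimal sketch (note that the dummy data stores variances, so the pre-treatment SD is the square root of `var_pre`; the small example data frame here is hypothetical, and I take post minus pre so that positive $d$ means an increase after treatment):

```r
# Standardized mean change, standardizing by the pre-treatment SD:
# d = (mean_post - mean_pre) / SD_pre
my_data <- data.frame(
  mean_pre  = c(1.1, 2.0, 2.9),   # hypothetical group means, pre-treatment
  mean_post = c(2.2, 4.1, 5.8),   # hypothetical group means, post-treatment
  var_pre   = c(0.9, 1.1, 1.0)    # hypothetical pre-treatment variances
)
my_data$d <- (my_data$mean_post - my_data$mean_pre) / sqrt(my_data$var_pre)
round(my_data$d, 2)
# [1] 1.16 2.00 2.90
```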

What is more tricky is calculating the sampling variance for d. The usual equation to compute (or rather: estimate) the sampling variance is:

$v = 2(1-r)/n + d^2/(2n)$,

where $r$ is the pre-post correlation, and $n$ is the sample size of the group. The tricky thing is that $r$ is typically not reported. In that case, you could try to "guestimate" $r$ and then do a sensitivity analysis (i.e., you check whether the conclusions of your meta-analysis depend on which value(s) you plug in for $r$).
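A sensitivity analysis along these lines can be sketched by evaluating $v$ over a grid of plausible values for $r$ (the values of `n`, `d`, and the candidate correlations below are hypothetical):

```r
# Sampling variance of d: v = 2(1 - r)/n + d^2/(2n),
# evaluated for several "guestimated" pre-post correlations r
n <- 100                        # hypothetical group sample size
d <- 0.5                        # hypothetical standardized mean change
r_values <- c(0.3, 0.5, 0.7)    # candidate pre-post correlations
v <- 2 * (1 - r_values) / n + d^2 / (2 * n)
v                               # e.g. r = 0.5 gives v = 0.01125
```

One would then rerun the meta-analysis for each set of resulting sampling variances and check whether the conclusions change. (For real analyses, the metafor package's `escalc(measure = "SMCR", ...)` computes this effect size and its sampling variance, taking the assumed correlation via its `ri` argument, if I recall its interface correctly.)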

Regarding Question 2

If the scales differ between the pre- and the post-test, then the standardized mean change (or even just the raw mean change) is not very meaningful. In that case, I don't think that you can really estimate the change — since that would require that the same thing has been measured before and after the treatment. So, in that case, you could do separate meta-analyses of the pre-treatment means and of the post-treatment means. I am assuming here that the units of the pre-treatment means are the same across studies (i.e., the same measurement instrument has been used across all studies for the pre-treatment assessment) and the same thing for the post-treatment means (i.e., the same measurement instrument has been used across all studies for the post-treatment assessment).
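Such a separate meta-analysis of, say, the pre-treatment means could be sketched with a simple fixed-effect (inverse-variance) pooling; all study values below are hypothetical:

```r
# Fixed-effect pooling of pre-treatment means across studies
# (the post-treatment means would be pooled analogously, separately)
mean_pre <- c(1.2, 0.9, 1.1, 1.3, 1.0)   # hypothetical study means
sd_pre   <- c(1.0, 0.8, 1.2, 0.9, 1.1)   # hypothetical study SDs
n        <- c(80, 120, 95, 110, 100)     # hypothetical sample sizes

vi <- sd_pre^2 / n                 # sampling variance of each study mean
wi <- 1 / vi                       # inverse-variance weights
pooled  <- sum(wi * mean_pre) / sum(wi)
se_pool <- sqrt(1 / sum(wi))
c(pooled = pooled, se = se_pool)
```

A random-effects model would usually be more appropriate if the studies are heterogeneous; this fixed-effect version is only meant to illustrate the idea of pooling pre- and post-treatment means in two separate analyses.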
