Does anyone have any advice on how best to perform a power calculation to determine what effect size a meta-analysis has the power to detect? I have performed a random-effects meta-analysis and would like to be able to say that this meta-analysis had 80% power (alpha 0.05) to detect an effect size of XX.

Many thanks

Rob

#### Best Answer

The whole issue of performing retrospective power calculations (not just in meta-analysis) is one which has divided opinion. Russell Lenth has provided some of the most convincing arguments against doing it at all (convincing in the sense that they convinced me). His web pages give some hints, and his paper "Some practical guidelines for effective sample size determination" in *The American Statistician*, widely available from many other web sites, gives more detail.

The basic message is that the results you got, in particular the confidence interval or significance level, give you all the information you need about the precision of your estimates.
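That said, if one still wants the number the question asks for, the usual approach is to invert the test for the pooled estimate: under a normal approximation, the effect size detectable with a given power is the summary standard error multiplied by the sum of the two relevant normal quantiles. Here is a minimal sketch of that calculation; it assumes only that the random-effects summary estimate is approximately normal with a known standard error (the `se = 0.15` value is purely hypothetical), and is not tied to any particular meta-analysis package:

```python
from statistics import NormalDist

def detectable_effect(se, alpha=0.05, power=0.80):
    """Smallest effect size detectable with the given power,
    using a two-sided Wald-test normal approximation:
    delta = se * (z_{1-alpha/2} + z_{power})."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_power = z.inv_cdf(power)          # ~0.84 for 80% power
    return se * (z_alpha + z_power)

# Hypothetical random-effects summary standard error
se = 0.15
print(round(detectable_effect(se), 3))  # roughly 2.80 * se
```

For 80% power at alpha = 0.05 the multiplier is about 2.80, so the detectable effect is roughly 2.8 times the standard error of the pooled estimate — which illustrates Lenth's point: it is a one-to-one transformation of the precision you already have.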

Performing sample size determination before doing the study is, of course, not deprecated in the same way. In the field of meta-analysis the main focus has been on meta-regression rather than vanilla meta-analysis. I have to say that I have never seen much value in this, since the sample size is not under your control, and even finding that there are too few studies to make sensible inferences is still a valuable piece of work to publish. But that is a personal and possibly heretical view.
