Solved – Computing the true effect size from F-values and sample sizes

I'm trying to extract a number of effect sizes from several different studies in which the authors performed ANOVAs on their data. More specifically, I'm interested in the main effects of a certain dichotomous variable.

Now, if I only have the F-values and the sample sizes for the two groups I'm interested in, can I really deduce something meaningful from them? There is an R package with a function called fes() (see page 45 of the manual here), to which you pass the F-value and the sample sizes and which returns an effect size. The formula that it uses is:


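For reference, the standard conversion from a two-group F-value to Cohen's $d$ (which is presumably what fes() implements here — this is my reconstruction, not a quote from the manual) goes through the equivalent t-test:

```latex
% For a one-way ANOVA with two independent groups, F = t^2,
% and Cohen's d = t * sqrt(1/n_1 + 1/n_2), so:
\[
  d \;=\; \sqrt{\frac{F\,(n_1 + n_2)}{n_1\, n_2}}
\]
```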
However, I did some tests with different ANOVAs. I held the data for the two groups I'm interested in constant, and added/removed other data to create several different analyses: a one-way ANOVA, a 2×2 ANOVA, a 2×3 ANOVA, and a 2×2×2 ANOVA. All of them gave me different F-values for the main effect of the variable I'm interested in, and consequently fes() gave me different estimates of the effect size.

I'm not quite sure what I'm doing here. Is it ever possible to get some kind of "true" effect size (that is, the same one you would get if you had the means and standard deviations of the two groups) from an F-value and the sample sizes? If so, for what type of ANOVA?

In this answer, it's suggested that it's not possible, but what's the deal with the fes() function then?
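For what it's worth, a quick numerical sanity check (sketched in Python for brevity, though I'm working in R; the conversion formula `d = sqrt(F*(n1+n2)/(n1*n2))` is my assumption about what fes() does) suggests the conversion is exact at least for a one-way design with exactly two groups:

```python
import math

# two made-up groups of data
g1 = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4]
g2 = [5.5, 6.1, 5.0, 5.9, 6.4, 5.7]
n1, n2 = len(g1), len(g2)

def mean(x):
    return sum(x) / len(x)

def var(x):  # unbiased sample variance
    m = mean(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)

# Cohen's d computed directly from means and pooled SD
sp = math.sqrt(((n1 - 1) * var(g1) + (n2 - 1) * var(g2)) / (n1 + n2 - 2))
d_raw = (mean(g2) - mean(g1)) / sp

# For two groups, the one-way ANOVA F equals t^2 from the independent t-test
t = (mean(g2) - mean(g1)) / (sp * math.sqrt(1 / n1 + 1 / n2))
F = t ** 2

# d recovered from F and the group sizes alone
d_from_F = math.sqrt(F * (n1 + n2) / (n1 * n2))

print(d_raw, d_from_F)  # the two values agree
```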

I think that, instead of the more traditional Cohen's $d$, you could use eta squared, $\eta^2$. As I'm not an expert on the topic, I will refer you to the following two IMHO excellent, comprehensive papers, which discuss the selection, calculation and interpretation of effect sizes in various situations and for various research designs in great detail (formulas are in the appendices). In particular, Lakens (2013) provides a formula for calculating partial eta squared, using the F-value and its degrees of freedom, for a certain subset of designs – I hope that it corresponds to your case (p. 6):

For designs with fixed factors (manipulated factors, or factors that exhaust all levels of the independent variable, such as alive vs. dead), but not for designs with measured factors or covariates, partial eta squared can be computed from the F-value and its degrees of freedom (e.g., Cohen, 1965):

$$ \eta_p^2 = \frac{F \times df_{\text{effect}}}{F \times df_{\text{effect}} + df_{\text{error}}} $$

For further relevant details and discussion, please refer to the above-mentioned papers.
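In code, that formula is a one-liner. Here it is as a Python sketch (the same expression works just as well in R); the function name and example values are my own:

```python
def partial_eta_squared(F, df_effect, df_error):
    """Partial eta squared from an F-value and its degrees of freedom,
    per the Lakens (2013) formula (valid for designs with fixed,
    manipulated factors only -- not measured factors or covariates)."""
    return (F * df_effect) / (F * df_effect + df_error)

# e.g. a main effect reported as F(1, 38) = 9.0:
print(partial_eta_squared(9.0, 1, 38))  # 9/47, roughly 0.19
```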

UPDATE: I almost forgot to mention an interesting R package, MBESS (home page), which allows calculation of effect sizes and their confidence intervals in various contexts, including ANOVA. The package is available on CRAN. The corresponding JSS paper (Kelley, 2007) is available online here.


Durlak, J. A. (2009). How to select, calculate, and interpret effect sizes. Journal of Pediatric Psychology, 34(9), 917-928. doi:10.1093/jpepsy/jsp004

Kelley, K. (2007). Confidence intervals for standardized effect sizes: Theory, application, and implementation. Journal of Statistical Software, 20(8).

Lakens, D. (2013). Calculating and reporting effect sizes to facilitate cumulative science: A practical primer for t-tests and ANOVAs. Frontiers in Psychology, 4(863). doi:10.3389/fpsyg.2013.00863
