I have a data set containing three groups (A, B, C). Group A and Group B have undergone two different types of training designed to improve the same thing. Group C is my control group with no training. I have specific hypotheses stating that:

- Group A will be better than group C
- Group B will be better than group C and
- Group B will be better than group A.

This is because the training used in group A has already been shown to improve over control (so I am trying to replicate that effect), while group B is a new training method which I expect to exceed both groups A and C.

How would I go about conducting this analysis? I have conducted a one-way ANOVA with planned comparisons, but I realized that my comparisons would have to be non-orthogonal, since comparing B to A and C in one contrast would pool variance from an experimental and a control condition. Ideally, I want these comparisons:

```
      A   B   C
C1:   1   0  -1
C2:   0   1  -1
C3:  -1   1   0
```

What is the proper way to conduct such comparisons?


#### Best Answer

You cannot test all 3 pairwise comparisons within a single model, because it will always be the case that one of the codes is a perfect linear combination of the other 2. For example, in the codes you wrote, if we call the rows/contrasts $C_1$, $C_2$, and $C_3$ (from top row to bottom row), notice that $C_1 = C_2 - C_3$.
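The dependence is easy to verify numerically. A minimal sketch in plain Python, using the contrast rows from the question:

```python
# Contrast rows from the question: each list gives the weights for groups A, B, C.
c1 = [1, 0, -1]    # A vs C
c2 = [0, 1, -1]    # B vs C
c3 = [-1, 1, 0]    # B vs A

# c1 is a perfect linear combination of the other two rows (c1 = c2 - c3),
# so a design matrix containing all three contrasts is rank-deficient.
diff = [b - c for b, c in zip(c2, c3)]
print(diff == c1)  # -> True
```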

On a more intuitive level, we know that something would have to be wrong with a model that only made predictions based on the 3 pairwise differences, because for any given set of 3 pairwise differences, there are infinitely many sets of 3 group means that would produce them. For example, if the differences are $\bar{A}-\bar{B}=1$, $\bar{B}-\bar{C}=1$, and $\bar{A}-\bar{C}=2$, then the group means could be $\bar{A}=2,\bar{B}=1,\bar{C}=0$, or $\bar{A}=3,\bar{B}=2,\bar{C}=1$, and so on.

Probably the easiest way to test all pairwise differences is simply to fit 2 separate models: one that compares A to B and A to C (e.g., dummy codes with A as the reference category), and a separate model with a code comparing B to C.
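A minimal numerical sketch of this two-model approach, using NumPy least squares rather than a dedicated ANOVA package (the data and group sizes below are simulated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
# Hypothetical scores for the three groups (made-up data for illustration).
a = rng.normal(0.0, 1.0, n)
b = rng.normal(1.0, 1.0, n)
c = rng.normal(0.0, 1.0, n)
y = np.concatenate([a, b, c])

# Model 1: dummy codes with A as the reference category.
# Columns: intercept, "is B", "is C" -> the two slopes estimate B-A and C-A.
X1 = np.column_stack([
    np.ones(3 * n),
    np.r_[np.zeros(n), np.ones(n), np.zeros(n)],   # indicator for group B
    np.r_[np.zeros(n), np.zeros(n), np.ones(n)],   # indicator for group C
])
beta1, *_ = np.linalg.lstsq(X1, y, rcond=None)
# beta1[1] equals mean(b) - mean(a); beta1[2] equals mean(c) - mean(a).

# Model 2: dummy codes with B as the reference category.
# Here the "is C" slope estimates the remaining difference, C-B.
X2 = np.column_stack([
    np.ones(3 * n),
    np.r_[np.ones(n), np.zeros(n), np.zeros(n)],   # indicator for group A
    np.r_[np.zeros(n), np.zeros(n), np.ones(n)],   # indicator for group C
])
beta2, *_ = np.linalg.lstsq(X2, y, rcond=None)
# beta2[2] equals mean(c) - mean(b).
```

In a full analysis you would of course also want standard errors and p-values for these coefficients (e.g., from an OLS routine in a statistics package); the point here is only that two dummy-coded models together recover all three pairwise differences.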

As for correcting $\alpha$: for tests of all pairwise differences among a set of means, a conventional method is Tukey's Honestly Significant Difference (HSD) procedure. Many resources cover the computation, including the formula for the test statistic in the unequal-sample-sizes case, and tables of critical values for the studentized range statistic are widely available.
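If you have SciPy available (version 1.8 or later), `scipy.stats.tukey_hsd` performs this procedure directly and handles unequal sample sizes; a sketch on simulated data (the group sizes and effect sizes are made up for illustration):

```python
import numpy as np
from scipy import stats  # tukey_hsd requires SciPy >= 1.8

rng = np.random.default_rng(1)
# Simulated scores for the three groups; unequal sample sizes are allowed.
group_a = rng.normal(0.5, 1.0, 12)
group_b = rng.normal(1.0, 1.0, 15)
group_c = rng.normal(0.0, 1.0, 10)

# Tukey's HSD tests all pairwise mean differences while controlling the
# family-wise error rate.
res = stats.tukey_hsd(group_a, group_b, group_c)

# res.pvalue is a 3x3 matrix: entry [i, j] is the adjusted p-value for the
# comparison of sample i against sample j.
print(res.pvalue)
```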
