I have done a repeated-measures study with two between-subjects variables: Group (Control vs. Experimental) and Religious affiliation (Yes vs. No). The task was to rate 10 stimuli (images) on a 5-point Likert scale. Half of the participants rated the control version of the stimuli and the other half rated the experimental version.

I have built a linear mixed-effects model using the lme function in R (nlme package), where I included participant identity, religious affiliation, and stimuli as nested random effects.

I wanted to test whether there was a significant effect of my two fixed effects (Group and Religion) and whether there was a significant interaction between them.

Here is the problem: when I run the anova command, Religion is significant, but it is not when I run summary() on the saturated model.

Here I copy the model I built:

```r
baseline <- lme(Ratings ~ 1, random = ~1 | PID/Religion/Stimuli,
                data = newDframe, method = "ML")
GroupM    <- update(baseline, . ~ . + Group)
ReligionM <- update(GroupM, . ~ . + Religion)
Group_Rel <- update(ReligionM, . ~ . + Group:Religion)
anova(baseline, GroupM, ReligionM, Group_Rel)
summary(Group_Rel)
```

and here is the output:

```
#           Model df      AIC      BIC    logLik   Test  L.Ratio p-value
# baseline      1  5 1822.944 1845.093 -906.4720
# GroupM        2  6 1824.490 1851.069 -906.2451 1 vs 2 0.453769  0.5006
# ReligionM     3  7 1817.388 1848.396 -901.6940 2 vs 3 9.102179  0.0026 **
# Group_Rel     4  8 1818.704 1854.141 -901.3519 3 vs 4 0.684343  0.4081
# > summary(Group_Rel)
# Linear mixed-effects model fit by maximum likelihood
#  Data: newDframe
#       AIC      BIC    logLik
#  1818.704 1854.141 -901.3519
#
# Random effects:
#  Formula: ~1 | PID
#         (Intercept)
# StdDev:   0.4294171
#
#  Formula: ~1 | Religion %in% PID
#         (Intercept)
# StdDev:   0.4294212
#
#  Formula: ~1 | PaintIDnum %in% Religion %in% PID
#         (Intercept)  Residual
# StdDev:    0.886335 0.3555525
#
# Fixed effects: Ratings ~ Group + Religion + Group:Religion
#                                  Value Std.Error  DF   t-value p-value
# (Intercept)                   3.280000 0.1521495 558 21.557743  0.0000
# GroupTransformed             -0.112000 0.2041300  58 -0.548670  0.5853
# Religionyes                   0.483636 0.2554201  58  1.893494  0.0633
# GroupTransformed:Religionyes  0.331697 0.4011532  58  0.826859  0.4117
```

I know that if a term is not significant (p > .05) in the anova output but a contrast is significant (p < .05) in the summary, I should just report that there was no significant main effect and ignore the summary output (even if a contrast is significant, since the higher-order test is not). But I can't find an answer for when it is the other way around.

What shall I report when describing the results of my analysis?

Many thanks for your help!


#### Best Answer

This can happen when you build your model sequentially, as you did. In this case, you should report the results of the AIC/LRT analyses (`anova(baseline, GroupM, ReligionM, Group_Rel)`) and then build a simpler model containing only the 'significant' variables from that initial analysis, from which to make inferences about parameter values.

```r
ReligionM2 <- lme(Ratings ~ Religion, random = ~1 | PID/Religion/Stimuli,
                  data = newDframe, method = "ML")
summary(ReligionM2)
```

That should reconcile the p-values from the overall model-comparison approach with the p-values for the individual parameters.
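To see why this reconciliation works, here is a minimal sketch on simulated data (the data frame `d`, effect sizes, and seed are all made up, not your `newDframe`; it assumes only the nlme package that ships with R). When the model contains just the term of interest, the anova() likelihood-ratio p-value and the summary() Wald t-test p-value for that term land very close together:

```r
library(nlme)

# Simulate a between-subjects Religion effect on repeated ratings
set.seed(1)
n_subj <- 30; n_stim <- 10
d <- expand.grid(PID = factor(1:n_subj), Stimuli = factor(1:n_stim))
d$Religion <- factor(ifelse(as.integer(d$PID) <= n_subj / 2, "yes", "no"))
subj_eff  <- rnorm(n_subj, sd = 0.4)[as.integer(d$PID)]  # participant intercepts
d$Ratings <- 3 + 0.5 * (d$Religion == "yes") + subj_eff +
  rnorm(nrow(d), sd = 0.6)                               # residual noise

baseline  <- lme(Ratings ~ 1,        random = ~1 | PID, data = d, method = "ML")
religionM <- lme(Ratings ~ Religion, random = ~1 | PID, data = d, method = "ML")

anova(baseline, religionM)   # LRT p-value for adding Religion
summary(religionM)$tTable    # Wald t-test for the same single term
```

Because Religion is the only fixed term, both tests ask essentially the same question of the same model, so their p-values agree closely, which is not guaranteed when other terms have been added on top.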

It's interesting that you have religion as both a fixed effect and a random effect.
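Since Religion is a between-subjects grouping (constant within each PID), one common alternative is to keep it in the fixed part only and use crossed random intercepts for participants and stimuli. A hedged sketch of that specification, assuming the lme4 package and simulated data in place of newDframe:

```r
library(lme4)

# Simulated stand-in for newDframe: each participant rates all stimuli,
# with Group and Religion both between-subjects
set.seed(2)
n_subj <- 20; n_stim <- 10
d <- expand.grid(PID = factor(1:n_subj), Stimuli = factor(1:n_stim))
d$Group    <- factor(ifelse(as.integer(d$PID) %% 2 == 0, "Transformed", "Control"))
d$Religion <- factor(ifelse(as.integer(d$PID) <= n_subj / 2, "yes", "no"))
d$Ratings  <- 3 +
  rnorm(n_subj, sd = 0.4)[as.integer(d$PID)] +      # participant intercepts
  rnorm(n_stim, sd = 0.3)[as.integer(d$Stimuli)] +  # stimulus intercepts
  rnorm(nrow(d), sd = 0.5)                          # residual noise

# Religion appears only as a fixed effect; PID and Stimuli get
# crossed (not nested) random intercepts
m <- lmer(Ratings ~ Group * Religion + (1 | PID) + (1 | Stimuli),
          data = d, REML = FALSE)
summary(m)
```

With one rating per participant-stimulus cell, the lowest nesting level in the original model is confounded with the residual, which the crossed specification avoids.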
