I'm trying to run a mixed-effects analysis on some data that I have, but cannot determine if I am using the correct model.

First, I am trying to determine if there are between-group differences in 5 independent variables. My data consist of a number of participants who each performed 31 different tasks (we'll call them Items), during which these 5 variables were measured. I would like to test for differences in these variables, not necessarily in relation to each other, but I would like to keep them in one model so as to avoid excessive testing. I would, however, like to control for the variance between these items, and if possible also between participants.

I am currently using the lme4 package to run a logistic regression:

`glmer(Group ~ Var1 + Var2 + Var3 + Var4 + Var5 + (1|item), family = binomial)`

My thinking here is that if there are group differences, then the variables should also predict group membership, effectively testing my hypothesis, if in an indirect way. Please correct me if this assumption is incorrect.

Ideally I would like to run this with Participant as an additional random factor, but I don't think that makes sense in a model testing for group differences.

The alternative is to run separate regression models for each of my independent variables, with Group, Item, and Participant as random effects. However, as I said before, I don't want to over-test the data, so I'm not sure if this is an advisable way to go about it.

Can anyone let me know if my current setup is a valid way to test for significant differences of multiple variables between 2 groups?

EDIT:

If the above is NOT valid, and the test should be the other way around, `lmer(Var1 ~ Group + (1|item))`, is it then recommended to also model participants as a random effect, or will this interfere with the fixed effect of Group?


#### Best Answer

In the model that you specified in your initial question,

`glmer(Group ~ Var1 + Var2 + Var3 + Var4 + Var5 + (1|item), family = binomial)`

you would be asking the following question: which variables, measured across a number of items, are independent predictors of group assignment/membership?

If this were a test to try and diagnose a pathology, then it might make sense. However, it sounds more like an experimental design with treatment assignment. In this case, each response should be tested separately. The model you left in the comments section would be correct:

`lmer(Var1 ~ Group + (1|item) + (1|participant), data = data)`

This design would be considered a crossed random effects model. You can confirm that you specified it correctly as a similar design is shown here.
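As a sketch of what fitting one such model per response could look like (the data here are simulated placeholders, and the column names `Group`, `item`, and `participant` are assumptions standing in for your own variables):

```r
library(lme4)

# Simulated placeholder data: 20 participants crossed with 31 items.
set.seed(1)
d <- expand.grid(participant = factor(1:20), item = factor(1:31))
d$Group <- ifelse(as.integer(d$participant) <= 10, "A", "B")
for (v in paste0("Var", 1:5)) d[[v]] <- rnorm(nrow(d))

# One model per response, each with crossed random effects
# for item and participant.
responses <- paste0("Var", 1:5)
models <- lapply(responses, function(y) {
  f <- reformulate(c("Group", "(1|item)", "(1|participant)"),
                   response = y)
  lmer(f, data = d)
})
names(models) <- responses

summary(models[["Var1"]])  # fixed effect of Group for the first response
```

With purely random noise the random-effect variances may be estimated near zero (lme4 will warn about singular fits), but the same loop applies unchanged to real data.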

Regarding multiple testing, one cannot add many terms to a model to avoid the multiple comparisons problem, because: 1) there are no tricks to avoid multiple testing if you do in fact make multiple tests, and 2) the tests would have a different interpretation in a full model, since each estimate is adjusted for the values of the other variables. There are certainly situations where one would want to do this, especially in observational studies, but for experimental designs it would be pretty unusual.

My two cents: whether you should adjust for multiple comparisons will depend on the context. If each response has its own pre-specified hypothesis, if the responses are routinely tested together, or if some are more like positive controls (we expect them to change, but they are not the main hypothesis of interest), then you could justify using the unadjusted p-values. However, if you don't have clearly defined hypotheses, or if you would take "a significant result, any significant result", then you should adjust the p-values to maintain your confidence at the nominal level.
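If you do decide to adjust, base R's `p.adjust` handles this directly; for example, with five made-up p-values (one per response) and a Holm correction:

```r
# Hypothetical unadjusted p-values from the five per-response models.
p <- c(0.01, 0.02, 0.03, 0.04, 0.05)
p.adjust(p, method = "holm")
# -> 0.05 0.08 0.09 0.09 0.09
```

Holm is uniformly more powerful than plain Bonferroni while still controlling the family-wise error rate; `method = "BH"` would instead control the false discovery rate.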
