I know there are already lots of questions around this topic (especially this one and this one), but I haven't really seen anything that directly helps me (it will be obvious I'm not a great statistician, but I'll do my best to explain).

I am running an ordinal regression in R (`clm` and `clmm`). My response variable is a rating between 0 and 4. I have two types of explanatory variables: individual and scenario variables [let's say `IVs` and `SVs`].

Six different scenario variables (all dummies with at most 4 different values) represent potential collaboration scenarios that get rated by the respondent (between 0 and 4), creating the response variable. (The research design is a conjoint analysis; there are 192 possible scenarios in total.)

On top of that I have a variety of individual characteristics about the respondent (age, gender, work experience, networking skills, …) all derived from a survey.

Every respondent rates between 3 and 16 different scenarios (average 8.1); every scenario is rated by at least 8 respondents. Every respondent and every scenario has a unique identifier (called `IVid` and `SVid`), so respondents and scenarios are crossed rather than nested within each other.

Thus the basic regression looks like this:

`clm.base <- clm(rating ~ SVs + IVs, data = dt) `
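A minimal sketch of that call (assuming `dt` is a data frame containing the rating and the SV/IV columns; `clm` from the `ordinal` package expects the response to be a factor, ideally ordered):

```r
## Sketch only: dt, SVs and IVs stand in for the actual columns.
library(ordinal)

dt$rating <- factor(dt$rating, levels = 0:4, ordered = TRUE)
clm.base  <- clm(rating ~ SVs + IVs, data = dt)
summary(clm.base)   # thresholds (intercepts) plus one coefficient per predictor
```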

The hypothesis I am trying to test is that there are specific individual characteristics that influence the rating of the scenarios, independent of the actual content of the scenarios. Basically, some people are more or less favourable to all types of collaboration scenarios.

Now a reviewer of my paper asks me to include individual fixed effects (which in management [my field] basically means dummies for each individual). My original assumption was that this would result in all individual variables being dropped. This is exactly what happens when I use another model (package `lfe`):

`felm.complete <- felm(rating ~ SVs + IVs | SVid + IVid | 0 | IVid, data = dt) `

In this regression, basically all my individual variables are perfectly collinear (with the individual dummies), as expected.

However, when I approximate this in the `ordinal` package, there is no perfect collinearity. I presume this is related to the fact that `clmm` adds so-called 'random effects'. The regression takes a couple of minutes to run but eventually returns results:

`clmm.complete <- clmm(rating ~ SVs + IVs + (1|SVid) + (1|IVid), data = dt) `

Now, the results here are pretty useless:

- All but one of my IVs are insignificant

I am trying to understand what exactly happens when adding the `(1|IVid)` term in the `clmm` model. If it basically adds something like an individual dummy, then the fact that almost everything is now insignificant is no surprise. The coefficients of the `IVid` dummies would capture the effect I am looking for (some people rate all scenarios higher or lower, regardless of scenario content) most accurately.
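For comparison, the literal "individual dummies" version of this idea in `clm` would look something like the hypothetical sketch below; note that the respondent-level `IVs` would have to be dropped, since they are constant within each respondent:

```r
## Hypothetical fixed-effects analogue: one dummy per respondent.
## The individual-level IVs cannot be included alongside these
## dummies, as they are perfectly collinear with them.
clm.dummies <- clm(rating ~ SVs + factor(IVid), data = dt)
```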

Now I wonder whether this interpretation is correct, or whether the results I got from running the simple `clm` regression are just not reliable.

Concretely, I'd like to find out:

- What happens when adding a random effect in `clmm`?
- A layman's explanation of how the Laplace approximation works.
- How to group errors around individuals when running `clm`.
- Is it possible to extract the coefficients of these random effects `(1|id)`, insofar as there is such a thing?


#### Best Answer

I can't answer all of your questions, but this is what I know for sure.

Adding a random effect to clmm has much the same impact as adding a random term to a logistic regression model. Take a simple case with ordinal response $Y$, continuous covariate $X$ and grouping variable $G$. Write $p_k=Prob(Y leq k)$, for some level of the ordinal response. The model posits that for subject $i$ in group $j$, $$log frac{p_k}{1-p_k}=theta_k – beta x_{ij} – g_j$$

Takeaways:

- We are modeling a chain of logistic regressions where the binary response is: "less than or equal to a certain level" vs "greater than that level".
- The $\theta_k$ parameter depends only on the ordinal response. It's like an intercept term, although there is one of these for each level.
- $\beta$ is the slope parameter for the fixed effect $x$, and it does not depend on $k$. The impact of $x$ applies proportionally to all levels of the response.
- $g_j$ is an unseen grouping effect. `clmm` will estimate the value of $g_j$ and the variance parameter of its (assumed) normal distribution.
- The minus signs are a convention of `clm` and `clmm`. In interpreting the parameters, negative values imply a greater probability for low-ranking responses and positive values imply that the odds favour higher levels of the response. I do not believe that this convention is universal. SPSS, I'm looking at you.

In this simple model, the random effect increases or decreases every cumulative odds by a factor of $e^{-g_j}$. This factor is independent of the level itself: if, say, $e^{-g_j}=0.5$ in some group, then all odds ratios are cut by half for members of that group. This proportionality assumption is the weak link of the proportional odds model.
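A toy calculation (made-up numbers, not from the question's data) shows why the factor $e^{-g_j}$ is the same at every level $k$:

```r
## With thresholds theta and a group effect g_j, the cumulative
## odds in group j are exp(theta_k - g_j) for every level k, so
## the ratio to the baseline odds exp(theta_k) is always exp(-g_j).
theta <- c(-1.2, 0.3, 1.5, 2.8)   # hypothetical thresholds, one per level
g_j   <- log(2)                   # hypothetical group effect

odds_baseline <- exp(theta)        # odds of Y <= k when g_j = 0
odds_group    <- exp(theta - g_j)  # odds of Y <= k in group j

odds_group / odds_baseline         # 0.5 at every level: exp(-g_j)
```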

You could include random slopes as well, if you believe there is an interaction between the group and covariate $x$.

You can extract the random effects that the model estimates using the method `ranef`, as in `ranef(clmm.complete)`.
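As a reproducible illustration (using the `wine` data that ships with the `ordinal` package rather than the question's data), a random intercept per judge plays the role of `(1|IVid)`:

```r
library(ordinal)

## Bitterness ratings of wine; one random intercept per judge.
fm <- clmm(rating ~ temp + contact + (1 | judge), data = wine)

ranef(fm)$judge   # conditional modes of the judge effects
condVar(fm)       # their conditional variances, for rough intervals
```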

The Laplace approximation is a method of numerical integration. In a random-effects model, the likelihood involves an integral over the unobserved random effects, which has no closed form for an ordinal response; the Laplace approximation replaces the integrand with a Gaussian centred at its mode, which makes the integral tractable at each step of the likelihood maximization. However, I don't know the details of the algorithm employed by `clmm`. Perhaps someone else can give a better answer.

Be aware that `clmm` requires a numerical optimization to produce estimates. As part of the summary function, you get some information about whether or not the optimization succeeded: the maximum absolute gradient should be near zero and the condition number of the Hessian (`cond.H`) should be below about $10^4$.
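Concretely (assuming `clmm.complete` is the fitted model from the question), the printed summary header carries these diagnostics:

```r
## max.grad should be close to zero; cond.H is the condition
## number of the Hessian and should stay well below 1e4.
summary(clmm.complete)
```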

Regarding the specifics of your model, I don't quite understand the difference between variable `SVs` and variable `SVid`. In any case, adding `(1|SVid)` basically changes the odds ratio for each group indexed by `SVid`. The model has to estimate one parameter for each `SVid` group, a variance, and possibly some covariance terms with other parameter estimates. Your description of the study suggests a lot of groups and not much data in each, so that could lead to problems with instability and convergence of the model.

I'm not sure what you mean by "grouping errors around individuals".
