Is there really a difference in results between a GLMM with a Gamma distribution and a Gaussian model fit to the log-transformed response? If so, how do I choose between the two methods?

See the question "predict.merMod in R is acting strangely (poorly) when using new levels with glmer models" for an issue with predicting values from a Gamma model that seems to be fixed by using a log-transformed model.

Any thoughts?


#### Best Answer

There is a huge difference between these models. One need not even appeal to the issue of mixed modeling; consider the GLM case alone.

GLMs generalize linear regression by allowing you to specify a link function and a variance structure. A log link with Gaussian errors gives one regression model; the Gamma family gives another. The canonical link for Gamma regression is the inverse link, not the log. One can fit a Gamma GLM with a log (or inverse) link, but the Gamma probability model still implies a different variance structure than the normal: see `?Gamma`, whose `variance` function encodes the mean-variance relationship `variance(mu) = mu^2`, i.e. the variance grows with the square of the mean (constant coefficient of variation), whereas the Gaussian assumes constant variance. Note also that a Gaussian model on `log(y)` models `E[log(y)]`, while a log-link Gamma GLM models `log(E[y])`; these are generally different quantities. Also see `?glm` and `?family`.
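To make the contrast concrete, here is a minimal sketch in R, using simulated data, of the three models being compared (the coefficients and simulation settings are illustrative, not from the question):

```r
set.seed(1)
n <- 500
x <- runif(n)
# Gamma outcome whose log-mean is linear in x (shape = 2, so mean = shape/rate)
mu <- exp(1 + 0.5 * x)
y  <- rgamma(n, shape = 2, rate = 2 / mu)

fit_gamma_log <- glm(y ~ x, family = Gamma(link = "log"))      # models log(E[y])
fit_gamma_inv <- glm(y ~ x, family = Gamma(link = "inverse"))  # canonical link
fit_lognormal <- lm(log(y) ~ x)                                # models E[log(y)]

# The Gamma/log fit and the log-transformed Gaussian fit target different
# quantities (log(E[y]) vs E[log(y)]), so their intercepts differ even when
# the slope estimates are similar.
coef(fit_gamma_log)
coef(fit_lognormal)
```

All three fits are on the same data, yet they make different assumptions about both the mean function and the variance, which is exactly why the choice between them matters.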

Now turn attention to the mixed model. A very important consideration is collapsibility: a log link is collapsible, while an inverse link is not. This matters because mixed models estimate "conditional" effects, i.e. the fixed effects are conditional upon changes within the clustering unit, whereas a marginal model averages over those differences. With a collapsible link, adding random intercepts does not change the fixed-effect estimates; with a non-collapsible link it does. See some simulation and discussion here, as well as Judea Pearl's book *Causality*.
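A small simulation sketch of the collapsibility point, assuming Gamma outcomes with additive cluster-level random intercepts on the linear-predictor scale (all parameter values here are made up for illustration):

```r
set.seed(2)
n_clusters <- 200; n_per <- 20
cluster <- rep(1:n_clusters, each = n_per)
b <- rep(rnorm(n_clusters, sd = 0.5), each = n_per)  # random intercepts
x <- runif(n_clusters * n_per)

# Log link: conditional mean exp(1 + 0.5*x + b).
# Averaging over exp(b) only rescales the mean, so a marginal
# (cluster-ignoring) fit still recovers a slope near 0.5.
y_log <- rgamma(length(x), shape = 2, rate = 2 / exp(1 + 0.5 * x + b))
coef(glm(y_log ~ x, family = Gamma(link = "log")))["x"]

# Inverse link: conditional mean 1 / (1 + 0.5*x + b).
# Averaging 1/(eta + b) over b is not 1/(eta + constant), so the
# marginal slope is distorted relative to the conditional one.
# (b is truncated below only to keep the linear predictor positive
# in this sketch.)
mu_inv <- 1 / (1 + 0.5 * x + pmax(b, -0.8))
y_inv <- rgamma(length(x), shape = 2, rate = 2 / mu_inv)
coef(glm(y_inv ~ x, family = Gamma(link = "inverse")))["x"]
```

Under the log link the random intercepts wash out into the marginal intercept, leaving the slope intact; under the inverse link they do not, which is the conditional-versus-marginal distinction described above.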

### Similar Posts:

- Solved – How to decide which family of variance/link functions to use in a generalized linear model
- Solved – Log Inverse Gamma Distribution
- Solved – Link function in a Gamma-distribution GLM
- Solved – How to choose the family in Generalized Linear Model in R