I was wondering how I should go about running my multiple regression.

I have two independent variables:

- group (binary scored as 0 and 1)
- activity (a continuous variable)

If I run a simple regression with these, both my independent variables are significant ($y = a + b$).

I want to check the interaction effect between the two independent variables on my one continuous dependent variable.

What I have done in SPSS so far is simply create another variable with Compute Variable, namely group * activity.
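To make the SPSS step concrete: Compute Variable here just builds a new column that is the element-wise product of the two predictors. A minimal sketch in plain Python, with hypothetical values for the two predictors:

```python
# Hypothetical data: a binary group indicator and a continuous activity score.
group = [0, 0, 1, 1, 1]
activity = [2.0, 3.5, 1.0, 4.2, 5.0]

# The interaction column is simply the product of the two predictors, row by row
# (this is what "Compute Variable: group * activity" produces).
interaction = [g * a for g, a in zip(group, activity)]
print(interaction)  # [0.0, 0.0, 1.0, 4.2, 5.0]
```

Note that the interaction column is zero for every case in group 0, which is why the interaction coefficient ends up describing how the activity slope *differs* between the groups.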

Now, when I run a regression with this interaction variable added ($y = a + b + ab$), the main effects of group and activity are no longer significant, and neither is the interaction effect.

I was wondering what the difference in interpretation is between running the model as $y = a + b + ab$ or simply as $y = ab$, because the last option is again significant. Could you help me with the interpretation of these tests?


#### Best Answer

When a model is fitted with only the main effects, $y = a + b$, and both are significant, this suggests that each of $a$ and $b$ contributes to explaining the variability in $y$. Taken together, the simultaneous effect of the two variables on $y$ may be either additive or multiplicative.

For example, the effect of variable $a$ alone on $y$ may be $\alpha$, and the effect of $b$ alone on $y$ may be $\beta$. Having both $a$ and $b$ present may produce an overall multiplicative effect $\alpha\beta$. This can be captured by the model $y = a + b + ab$. By doing so, the interpretation becomes a little tricky, since the main effects can no longer be interpreted on their own. Also, an interaction model without main effects would not make sense: the model $y = ab$ is not testing for interaction but has a different meaning. It simply tests whether a newly created variable $p = a \times b$ is linearly associated with your $y$.

Say you have the model $y = \beta_0 + \beta_1 a + \beta_2 b + \beta_3 ab$, where $a$ is the binary variable $\{0, 1\}$ and $b$ is the continuous variable. The predicted value of $y$ when $a = 1$ is $\hat{\beta}_0 + \hat{\beta}_1 + \hat{\beta}_2 b + \hat{\beta}_3 b$, and when $a = 0$ it is $\hat{\beta}_0 + \hat{\beta}_2 b$. The $p$-value associated with $\hat{\beta}_3$ (the interaction term) should be used to determine whether the interaction effect is significant.
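This can be illustrated numerically. The sketch below (plain Python, no external libraries, with made-up noiseless data) fits $y = \beta_0 + \beta_1 a + \beta_2 b + \beta_3 ab$ by ordinary least squares via the normal equations, then recovers the activity slope in each group: $\hat{\beta}_2$ for group 0 and $\hat{\beta}_2 + \hat{\beta}_3$ for group 1.

```python
def solve(A, v):
    """Solve A x = v by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [v[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [m - f * c for m, c in zip(M[r], M[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Least-squares coefficients via the normal equations (X'X) beta = X'y."""
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)] for j in range(p)]
    Xty = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    return solve(XtX, Xty)

# Hypothetical noiseless data: intercept 1, group effect 0.5,
# activity slope 2 in group 0, and interaction effect 1
# (so the activity slope in group 1 is 2 + 1 = 3).
data = [(g, act, 1.0 + 0.5 * g + 2.0 * act + 1.0 * g * act)
        for g in (0, 1) for act in range(1, 6)]
X = [[1.0, g, act, g * act] for g, act, _ in data]
y = [val for _, _, val in data]

b0, b1, b2, b3 = ols(X, y)
print(round(b2, 6), round(b2 + b3, 6))  # slope in group 0 vs group 1: 2.0 3.0
```

With real (noisy) data the fitted coefficients would come with standard errors, and it is the test on $\hat{\beta}_3$ alone that tells you whether the two group slopes differ significantly.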

So for your model, since the interaction effect is not significant, you should revert to the model without the interaction.
