In a linear model $Y = X\beta + \varepsilon$, one can easily test linear hypotheses of the form $H_0: C\beta = \gamma$, where $C$ is a matrix and $\gamma$ is a vector with dimension equal to the number of rows in $C$. Namely, one can derive a test statistic which has an F distribution under the null and go from there.
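For concreteness, the F statistic can be computed directly from the OLS fit. Below is a minimal numpy/scipy sketch (the function name `f_test` and the simulated data in the usage are my own, not from any particular library):

```python
import numpy as np
from scipy import stats

def f_test(X, y, C, gamma):
    """F-test of H0: C @ beta = gamma in the model y = X @ beta + eps.

    Returns the F statistic and its p-value; under the null the statistic
    has an F(q, n - p) distribution, where q is the number of rows of C.
    """
    n, p = X.shape
    q = C.shape[0]
    XtX_inv = np.linalg.inv(X.T @ X)
    beta_hat = XtX_inv @ X.T @ y                  # OLS estimate
    resid = y - X @ beta_hat
    s2 = resid @ resid / (n - p)                  # unbiased estimate of sigma^2
    d = C @ beta_hat - gamma
    F = d @ np.linalg.solve(C @ XtX_inv @ C.T, d) / (q * s2)
    return F, stats.f.sf(F, q, n - p)
```

For a single-row $C$ this reduces to the square of the usual t statistic, so the familiar coefficient tests are special cases.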

Theoretically, these tests are very interesting to me and seem quite flexible, as $C$ and $\gamma$ can be anything.

**However, I'm interested to know how useful these hypotheses are in practical applications, and what are some interesting examples of these applications?** (besides testing whether a single coefficient is zero, or all coefficients of the model are zero, which is reported by every `lm` call in R, for example)


#### Best Answer

These linear hypotheses on the coefficient vector have three main uses:

**Testing the existence of relationships:** We can test for a relationship between some subset of the explanatory variables and the response variable. To do this, let $\mathbf{e}_\mathcal{S}$ denote the matrix whose rows are the indicator vectors for the elements of the subset $\mathcal{S}$, and test the linear hypotheses:

$$H_0: \mathbf{e}_\mathcal{S} \boldsymbol{\beta} = \mathbf{0} \quad \quad \quad H_A: \mathbf{e}_\mathcal{S} \boldsymbol{\beta} \neq \mathbf{0}.$$
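As an illustration, the indicator construction amounts to stacking rows of the identity matrix. A sketch on simulated data (the variable names and the example design are my own, assuming the numpy/scipy formulation above):

```python
import numpy as np
from scipy import stats

# Simulated example: test H0: beta_2 = beta_3 = 0 (both truly zero here)
rng = np.random.default_rng(0)
n, p = 100, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 0.5, 0.0, 0.0]) + rng.normal(size=n)

S = [2, 3]                       # indices of the coefficients in the subset S
C = np.eye(p)[S]                 # rows are the indicator vectors e_S
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y            # OLS estimate
s2 = np.sum((y - X @ b) ** 2) / (n - p)
d = C @ b                        # gamma = 0 for this hypothesis
F = d @ np.linalg.solve(C @ XtX_inv @ C.T, d) / (len(S) * s2)
p_value = stats.f.sf(F, len(S), n - p)
```

This is exactly the "subset F-test" that `anova` in R performs when comparing a model with and without the variables in $\mathcal{S}$.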

**Testing a specified magnitude for the relationship:** We can test the magnitude of a relationship between an explanatory variable and the response variable using some specified value of interest. This is often useful when a particular specified magnitude has some practical significance (e.g., it is often useful to test if the true coefficient is equal to one). To test $\beta_k = b$ we use the linear hypotheses:

$$H_0: \mathbf{e}_k \boldsymbol{\beta} = b \quad \quad \quad H_A: \mathbf{e}_k \boldsymbol{\beta} \neq b.$$
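For a single coefficient this is equivalent to the familiar t-test of $\beta_k = b$ against a value other than zero. A sketch with simulated data (names and design are my own), testing $\beta_1 = 1$:

```python
import numpy as np
from scipy import stats

# Simulated example: test H0: beta_1 = 1 (the true value used to generate y)
rng = np.random.default_rng(0)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([2.0, 1.0, -0.5]) + rng.normal(size=n)

k, b0 = 1, 1.0                           # coefficient index, hypothesised value
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y                    # OLS estimate
s2 = np.sum((y - X @ b) ** 2) / (n - p)
se = np.sqrt(s2 * XtX_inv[k, k])         # standard error of beta_hat_k
t_stat = (b[k] - b0) / se
p_value = 2 * stats.t.sf(abs(t_stat), n - p)
# The F statistic with C = e_k and gamma = b0 equals t_stat ** 2.
```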

**Testing the expected responses of new explanatory variables:** We can test the expected values of the responses corresponding to a new set of explanatory variables. Taking new explanatory data $\boldsymbol{X}_\text{new}$ we get corresponding expected values $\mathbb{E}(\boldsymbol{Y}_\text{new}) = \boldsymbol{X}_\text{new} \boldsymbol{\beta}$. This means that we can test the hypothesis $\mathbb{E}(\boldsymbol{Y}_\text{new}) = \boldsymbol{y}$ via the hypotheses:

$$H_0: \boldsymbol{X}_\text{new} \boldsymbol{\beta} = \boldsymbol{y} \quad \quad \quad H_A: \boldsymbol{X}_\text{new} \boldsymbol{\beta} \neq \boldsymbol{y}.$$
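Here the same machinery applies with $C = \boldsymbol{X}_\text{new}$ and $\gamma = \boldsymbol{y}$. A sketch with simulated data and hypothesised responses (all names and values are my own illustrative choices):

```python
import numpy as np
from scipy import stats

# Simulated example: test H0: E(Y_new) = y0 for two new covariate rows
rng = np.random.default_rng(0)
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([2.0, 1.0, -0.5])
y = X @ beta_true + rng.normal(size=n)

X_new = np.array([[1.0, 0.5, -1.0],
                  [1.0, -2.0, 0.3]])
y0 = X_new @ beta_true           # hypothesised expected responses (true here)

q = X_new.shape[0]               # X_new plays the role of C, y0 the role of gamma
XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
s2 = np.sum((y - X @ b) ** 2) / (n - p)
d = X_new @ b - y0
F = d @ np.linalg.solve(X_new @ XtX_inv @ X_new.T, d) / (q * s2)
p_value = stats.f.sf(F, q, n - p)
```

Note that the rows of $\boldsymbol{X}_\text{new}$ must be linearly independent (and no more numerous than the columns of $X$) for the middle matrix to be invertible.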

As you can see, the first use is to test whether some of the coefficients are zero, which is a test of whether those explanatory variables are related to the response in the model. However, you can also undertake more general tests of a specified magnitude for the relationship, and you can use the same linear test to check the expected response for new data.
