I am currently reading a paper that formulates weighted linear regression in a Bayesian setting. In classic weighted least squares, we minimise the following:

$$

\sum_{i=1}^{N} w_i \left(\beta^T x_i - y_i\right)^2

$$

In this paper, they give a Bayesian formulation of WLS. It makes the following modelling choices about the probability distributions of the random variables:

$$

y_i \sim N\!\left(\beta^T x_i, \; \sigma^2 / w_i\right)

$$

So here each $y_i$ is modelled with a variance scaled by its individual weight. There is also a normal prior over the regression parameters $\beta$:

$$

\beta \sim N\!\left(\beta_0, \; \Sigma_{\beta,0}\right)

$$

There is a Gamma prior over each weight $w_i$:

$$

w_i \sim \mathrm{Gamma}(a_i, b_i)

$$
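To make the hierarchy concrete, here is a minimal sketch that simulates one draw from the generative model above. The hyperparameter values ($\beta_0 = 0$, $\Sigma_{\beta,0} = I$, $a_i = b_i = 2$, $\sigma^2 = 1$) are my own illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

N, d = 100, 3
X = rng.normal(size=(N, d))  # fixed design matrix

# Prior on the regression coefficients: beta ~ N(beta_0, Sigma_beta0)
beta_0 = np.zeros(d)
Sigma_beta0 = np.eye(d)
beta = rng.multivariate_normal(beta_0, Sigma_beta0)

# Gamma prior on each weight: w_i ~ Gamma(a, b) with rate b
a, b = 2.0, 2.0
w = rng.gamma(a, 1.0 / b, size=N)  # numpy's gamma takes a *scale* = 1/rate

# Likelihood: y_i ~ N(beta^T x_i, sigma^2 / w_i)
sigma2 = 1.0
y = X @ beta + rng.normal(scale=np.sqrt(sigma2 / w))
```

A small weight $w_i$ inflates the variance of $y_i$, so that observation is effectively down-weighted in the fit, which is exactly the role $w_i$ plays in the classical WLS objective.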

Now, the regression problem is basically:

$$

y_i = \beta^T x_i + \epsilon_i

$$

My question is: why is there no prior on $\epsilon$? In this paper, they estimate $\sigma^2$ through some standard regression formula (apologies, I have not got far enough to derive it yet). However, it seems to me that $\sigma^2$ is also an unknown parameter in the model, and if we follow Bayesian statistical modelling, we should specify a prior for it.

If anyone is curious, the paper is here:


#### Best Answer

This is just a model assumption the author made. Unfortunately, there is no standardized procedure to "follow Bayesian statistical modelling", so while you may specify a prior for the variance, it isn't a requirement for a linear regression to be Bayesian.
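If you did want to place a prior on the noise variance, a common conjugate choice for a normal likelihood is $\sigma^2 \sim \text{Inverse-Gamma}(a_0, b_0)$. The sketch below only samples from such a prior; the hyperparameter values are illustrative and not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
a0, b0 = 3.0, 2.0  # illustrative shape and rate

# If V ~ Gamma(a0, rate=b0), then 1/V ~ Inverse-Gamma(a0, b0).
# numpy's gamma takes a scale parameter, so scale = 1/b0.
sigma2_draws = 1.0 / rng.gamma(a0, 1.0 / b0, size=100_000)

# Prior mean of sigma^2 is b0 / (a0 - 1), which is 1.0 here
print(sigma2_draws.mean())
```

With this prior the full conditionals stay tractable, which is why it is the default choice in many Bayesian linear regression treatments; but, as above, omitting it and plugging in a point estimate of $\sigma^2$ is also a legitimate modelling decision.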
