The objective function in VQ-VAE (Eq. (3) in the paper) contains

$$\left\lVert \mathrm{sg}[z_e(x)] - e \right\rVert^2 + \left\lVert z_e(x) - \mathrm{sg}[e] \right\rVert^2,$$

where $\mathrm{sg}$ is the stop-gradient operator.

(Note: the second term can carry a weighting factor $\beta$, but "the results did not vary for values of $\beta$ ranging from $0.1$ to $2.0$. We use $\beta = 0.25$", so let's assume $\beta = 1$.)

What are the advantages of this objective over directly optimizing

$$\left\lVert z_e(x) - e \right\rVert^2$$

instead?
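
To make the question concrete, here is a minimal PyTorch sketch of the split objective (all shapes, the seed, and the nearest-neighbour lookup are illustrative assumptions, not the paper's exact setup). `sg[.]` becomes `.detach()`: the first term can only move the codebook vectors, and the second can only move the encoder output.

```python
import torch

# Toy stand-ins (shapes are arbitrary assumptions): z_e is the encoder
# output z_e(x), codebook is the embedding table e_1..e_K.
torch.manual_seed(0)
z_e = torch.randn(4, 8, requires_grad=True)
codebook = torch.randn(16, 8, requires_grad=True)

# Nearest-neighbour code assignment (the argmin is non-differentiable anyway).
with torch.no_grad():
    idx = torch.cdist(z_e, codebook).argmin(dim=1)
e = codebook[idx]

# sg[.] is implemented by .detach():
codebook_loss = ((z_e.detach() - e) ** 2).sum(dim=1).mean()    # updates e only
commitment_loss = ((z_e - e.detach()) ** 2).sum(dim=1).mean()  # updates z_e only

loss = codebook_loss + commitment_loss  # beta = 1, as assumed above
loss.backward()

# After backward(): z_e.grad comes solely from the commitment term, and
# codebook rows never selected by idx receive exactly zero gradient.
```

With the direct objective $\lVert z_e(x) - e \rVert^2$ there is a single term, so this decoupling (and any independent weighting of the two update directions) is lost.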


#### Best Answer

I have been looking into the same question, and this is what I have finally deduced: I think $\beta$ is a weighting factor that balances the importance of the two terms (the codebook loss and the commitment loss).

If $\beta$ is smaller than 1, the encoder is updated faster than the codebook.

That is desirable if, for example, we think of the codebook from a centroid perspective: we do not want the centroids to move strongly at each iteration, because we want to preserve some information from previous batches (all the more so when the batch is small).

In short, we want the centroids (codebook) to move slowly while the encoder outputs can be updated faster. This probably also reduces the noise introduced by mini-batch sampling, compared with training on the whole dataset at once.
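
This coupling argument can be checked directly in a toy sketch (random tensors stand in for $z_e(x)$ and the selected codes $e$; these are illustrative assumptions): with the direct single-term objective the two gradients are locked together, equal and opposite, whereas the split stop-gradient form lets $\beta$ rescale only the encoder's side.

```python
import torch

torch.manual_seed(0)
beta = 0.25  # the value used in the paper
z_e = torch.randn(4, 8, requires_grad=True)  # stand-in for encoder outputs
e = torch.randn(4, 8, requires_grad=True)    # stand-in for selected codes

# Direct objective: one term, so both sides get rigidly coupled gradients.
direct = ((z_e - e) ** 2).sum(dim=1).mean()
g_ze, g_e = torch.autograd.grad(direct, (z_e, e))
# Here g_ze == -g_e: there is no way to weight the two updates independently.

# Split objective with stop-gradients: beta rescales only the encoder side,
# while the codebook side is untouched.
split = ((z_e.detach() - e) ** 2).sum(dim=1).mean() \
      + beta * ((z_e - e.detach()) ** 2).sum(dim=1).mean()
g_ze_split, g_e_split = torch.autograd.grad(split, (z_e, e))
```

So choosing $\beta < 1$ is exactly the "slow centroids, fast encoder" behaviour described above.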

This is what I have deduced; if it is not correct, please point it out.
