The loss function of the WGAN is a continuous quantity that doesn't seem to have a convergence point. I don't really understand when we should stop training.
Best Answer
The WGAN loss does have a convergence point: 0. The critic's loss approximates the Wasserstein distance between the real and generated distributions, and it reaches 0 when the generator produces samples so good that no Lipschitz-continuous critic can distinguish real from generated samples.
In fact, a major selling point of WGAN is that the loss converges steadily in a way that tells you whether training is making progress. With traditional GANs, pretty much the only way to tell whether the generated samples are improving is visual inspection, and you stop training when the visual quality of the samples is satisfactory.
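In practice the per-batch Wasserstein estimate is noisy, so a common approach is to track its moving average and stop once that average stays near 0. Here is a minimal sketch of such a stopping criterion; the helper name `should_stop` and its parameters are hypothetical, not from the original post:

```python
# Hypothetical helper: decide when to stop WGAN training by monitoring
# the critic's Wasserstein estimate, which should trend toward 0 as the
# generator improves.

def should_stop(wasserstein_estimates, window=100, tol=0.01):
    """Return True once the moving average of the critic's Wasserstein
    estimate stays below `tol`.

    wasserstein_estimates: per-iteration values of
        mean(critic(real)) - mean(critic(fake)).
    """
    if len(wasserstein_estimates) < window:
        return False  # not enough history to judge convergence yet
    recent = wasserstein_estimates[-window:]
    avg = sum(recent) / window
    # Compare the moving average, not a single noisy batch value.
    return abs(avg) < tol
```

In a training loop you would append the critic's estimate after each critic update and break out of the loop once `should_stop(history)` returns `True`.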
Similar Posts:
- Solved – Wasserstein Loss is very sensitive to model architecture
- Solved – the stop criteria of generative adversarial nets
- Solved – How to interprete Discriminator and Generator loss in WGAN