As per the original paper that introduced GANs, the generator loss is given as: $$ L_{G} = \log(1 - \mathbf{D}(\mathbf{G}(\mathbf{\vec z}))) = -L_{BCE}(\mathbf{\vec 0}, \mathbf{D}(\mathbf{G}(\mathbf{\vec z}))) $$ (note the sign: $L_{BCE}(\mathbf{\vec 0}, p) = -\log(1 - p)$, so the paper's minimax term is the *negative* of the BCE with zero labels). But in the TensorFlow docs, in the implementation of DCGAN, the following generator loss is used:
    def generator_loss(fake_output):
        return cross_entropy(tf.ones_like(fake_output), fake_output)
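To make the two losses concrete, here is a small sketch in plain Python (no TensorFlow; `bce` below is a hand-rolled stand-in for what `cross_entropy` computes on a single sigmoid output, and the probability value is an illustrative assumption):

```python
import math

def bce(label, p):
    # Binary cross-entropy for a single sigmoid output p and target label.
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

# Suppose the discriminator assigns probability p = D(G(z)) to a fake sample.
p = 0.1  # early in training, the generator is weak and D easily spots fakes

paper_loss = math.log(1 - p)  # original minimax term: log(1 - D(G(z)))
tf_loss = bce(1.0, p)         # cross_entropy(ones, fake_output) = -log(D(G(z)))

# Both losses are minimized by pushing p toward 1, but their gradients differ:
#   d/dp log(1 - p) = -1/(1 - p)  -> small magnitude when p is small (saturates)
#   d/dp -log(p)    = -1/p        -> large magnitude when p is small
print(paper_loss, tf_loss)
```

Both objectives point the generator in the same direction (make `D(G(z))` large), which is presumably why the docs' version is used; the gradient comments hint at why the ones-label form gives a stronger signal early on.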
Isn't this a huge discrepancy from the paper? Can anyone explain why this works?
Comment: `fake_output = discriminator(generated_images, training=True)`. Can you point in more detail where you see the problem?