
As per the original paper that introduced GANs, the generator loss is given as: $$ L_{G} = -L_{BCE}(\mathbf{\vec 0}, \mathbf{D}(\mathbf{G}(\mathbf{\vec z}))) = \log(1 - \mathbf{D}(\mathbf{G}(\mathbf{\vec z}))) $$ (recall that $L_{BCE}(y, p) = -[y \log p + (1 - y)\log(1 - p)]$, so an all-zeros target gives $-\log(1 - p)$, hence the leading minus sign). But in the TensorFlow docs, in the implementation of DCGAN, the following generator loss is used:

import tensorflow as tf

# As defined earlier in the tutorial; fake_output holds raw discriminator logits
cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(fake_output):
    return cross_entropy(tf.ones_like(fake_output), fake_output)

Isn't this a huge discrepancy from the paper? Can anyone explain why this works?
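
For concreteness, here is a minimal sketch of what each objective computes numerically. The fake_output values below are made up, and I pass probabilities rather than raw logits for readability (the tutorial's cross_entropy uses from_logits=True):

import tensorflow as tf

# Hypothetical discriminator outputs D(G(z)) on fake samples, as probabilities
fake_output = tf.constant([[0.1], [0.5], [0.9]])

bce = tf.keras.losses.BinaryCrossentropy()  # from_logits=False: inputs are probabilities

# Tutorial-style loss: BCE against an all-ones target, i.e. -mean(log D(G(z)))
tutorial_loss = bce(tf.ones_like(fake_output), fake_output)

# Loss as written in the paper: mean(log(1 - D(G(z))))
paper_loss = tf.reduce_mean(tf.math.log(1.0 - fake_output))

print(tutorial_loss.numpy(), paper_loss.numpy())  # both decrease as D(G(z)) -> 1

Both are minimized by pushing $\mathbf{D}(\mathbf{G}(\mathbf{\vec z}))$ toward 1, yet the functional forms clearly differ.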

  • Hi @SagnikTaraphdar, welcome to the site. Maybe I'm missing something, but I fail to see the discrepancy. In the linked page, we can see that fake_output = discriminator(generated_images, training=True). Can you point in more detail where you see the problem?
    – noe
    Commented May 17 at 19:50
