In a GAN implementation, I am confused by the .trainable attribute of a tf.keras.Model.

Given the following code snippet (taken from this repo):

class GAN():

    def __init__(self):

        ...

        # Build and compile the discriminator
        self.discriminator = self.build_discriminator()
        self.discriminator.compile(loss='binary_crossentropy',
            optimizer=optimizer,
            metrics=['accuracy'])

        # Build the generator
        self.generator = self.build_generator()

        # The generator takes noise as input and generates imgs
        z = Input(shape=(self.latent_dim,))
        img = self.generator(z)

        # For the combined model we will only train the generator
        self.discriminator.trainable = False

        # The discriminator takes generated images as input and determines validity
        validity = self.discriminator(img)

        # The combined model  (stacked generator and discriminator)
        # Trains the generator to fool the discriminator
        self.combined = Model(z, validity)
        self.combined.compile(loss='binary_crossentropy', optimizer=optimizer)

    def build_generator(self):

        ...

        return Model(noise, img)

    def build_discriminator(self):

        ...

        return Model(img, validity)

    def train(self, epochs, batch_size=128, sample_interval=50):

        # Load the dataset
        (X_train, _), (_, _) = mnist.load_data()

        # Adversarial ground truths
        valid = np.ones((batch_size, 1))
        fake = np.zeros((batch_size, 1))

        for epoch in range(epochs):

            # ---------------------
            #  Train Discriminator
            # ---------------------

            # Select a random batch of images
            idx = np.random.randint(0, X_train.shape[0], batch_size)
            imgs = X_train[idx]

            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))

            # Generate a batch of new images
            gen_imgs = self.generator.predict(noise)

            # Train the discriminator
            d_loss_real = self.discriminator.train_on_batch(imgs, valid)
            d_loss_fake = self.discriminator.train_on_batch(gen_imgs, fake)
            d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

            # ---------------------
            #  Train Generator
            # ---------------------

            noise = np.random.normal(0, 1, (batch_size, self.latent_dim))

            # Train the generator (to have the discriminator label samples as valid)
            g_loss = self.combined.train_on_batch(noise, valid)



While the model self.combined is being defined, the discriminator's weights are frozen with self.discriminator.trainable = False, and the flag is never switched back on.

Nevertheless, in the training loop the discriminator's weights do still change during the lines:

# Train the discriminator
d_loss_real = self.discriminator.train_on_batch(imgs, valid)
d_loss_fake = self.discriminator.train_on_batch(gen_imgs, fake)
d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)


and remain unchanged during:

# Train the generator (to have the discriminator label samples as valid)
g_loss = self.combined.train_on_batch(noise, valid)


This is not what I expected.

Of course this is the correct (alternating) way to train a GAN, but I do not understand why we do not have to set self.discriminator.trainable = True before doing some training on the discriminator.

It would be great if someone could explain this; I think it is the key point to understand.
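For reference, the way I checked which weights actually move is to compare snapshots of get_weights() around each train_on_batch call. The helper below is my own sketch, not part of the repo:

import numpy as np

def weights_changed(model, step):
    """Run step() and report whether any weight of `model` changed."""
    before = [w.copy() for w in model.get_weights()]
    step()
    after = model.get_weights()
    return not all(np.allclose(b, a) for b, a in zip(before, after))

Inside the training loop, weights_changed(self.discriminator, lambda: self.discriminator.train_on_batch(imgs, valid)) should come out True, while weights_changed(self.discriminator, lambda: self.combined.train_on_batch(noise, valid)) should come out False, which is exactly the behaviour described above.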

Best Answer

When you have questions about code from a GitHub repository, it is usually a good idea to check the issues (open and closed). This issue explains why the flag is set to False. It says,


  Since self.discriminator.trainable = False is set after the discriminator was compiled, it does not affect the training of the discriminator. However, since it is set before the combined model is compiled, the discriminator layers are frozen when the combined model is trained.


It also discusses freezing keras layers.
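To illustrate the point from the issue, here is a minimal, self-contained sketch (a toy example with Dense layers, not the repo's code): the discriminator is compiled while it is still trainable, the flag is then switched off, and only afterwards is the combined model compiled. Training the combined model should leave the discriminator's weights untouched, while calling train_on_batch on the discriminator itself should still update them.

import numpy as np
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras.models import Model

# Toy discriminator, compiled while trainable is still True
d_in = Input(shape=(2,))
discriminator = Model(d_in, Dense(1, activation='sigmoid')(d_in))
discriminator.compile(loss='binary_crossentropy', optimizer='adam')

# Toy generator
noise = Input(shape=(2,))
generator = Model(noise, Dense(2)(noise))

# Freeze the discriminator *before* compiling the combined model
discriminator.trainable = False
z = Input(shape=(2,))
combined = Model(z, discriminator(generator(z)))
combined.compile(loss='binary_crossentropy', optimizer='adam')

x = np.random.normal(size=(8, 2)).astype('float32')
valid = np.ones((8, 1), dtype='float32')

# Training the combined model: the discriminator's weights should not move
before = [w.copy() for w in discriminator.get_weights()]
combined.train_on_batch(x, valid)
print(all(np.allclose(b, a) for b, a in zip(before, discriminator.get_weights())))  # expected: True

# Training the discriminator directly: it was compiled before the flag was set, so it still learns
before = [w.copy() for w in discriminator.get_weights()]
discriminator.train_on_batch(x, valid)
print(all(np.allclose(b, a) for b, a in zip(before, discriminator.get_weights())))  # expected: False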
