In a Generative Adversarial Network (GAN), what is the main job of the discriminator?
Think about which part of the GAN decides if an image looks real or fake.
The discriminator's job is to distinguish real images from the fake ones created by the generator. It acts as a judge, and its feedback is what pushes the generator to improve.
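A toy sketch of that judging objective (illustrative values, not a full training loop): the discriminator is trained with binary cross-entropy to score real images near 1 and generated ones near 0.

```python
import math

def discriminator_loss(d_real, d_fake):
    # Binary cross-entropy for one real/fake pair:
    # the discriminator should output near 1 for real images, near 0 for fakes.
    return -math.log(d_real) - math.log(1.0 - d_fake)

# A discriminator that judges well (real -> 0.9, fake -> 0.1) gets a low loss;
# one that cannot tell images apart (both -> 0.5) gets a higher loss.
confident = discriminator_loss(0.9, 0.1)
guessing = discriminator_loss(0.5, 0.5)
```

The generator is trained on the opposite objective: making `d_fake` as close to 1 as possible.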
Given the following generator model snippet, what is the shape of the output images?
import tensorflow as tf
from tensorflow.keras import layers

def build_generator():
    model = tf.keras.Sequential([
        layers.Dense(7*7*128, use_bias=False, input_shape=(100,)),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Reshape((7, 7, 128)),
        layers.Conv2DTranspose(64, (5, 5), strides=(2, 2), padding='same', use_bias=False),
        layers.BatchNormalization(),
        layers.LeakyReLU(),
        layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh')
    ])
    return model

gen = build_generator()
output_shape = gen.output_shape
Check the effect of Conv2DTranspose layers with stride 2 and padding 'same' on spatial dimensions.
The generator starts from a 7x7 feature map and upsamples twice by a factor of 2 (7 -> 14 -> 28), so the output shape is (None, 28, 28, 1): 28x28 images with 1 channel, plus the batch dimension.
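The shape arithmetic can be checked without running the model: with `padding='same'`, `Conv2DTranspose` multiplies each spatial dimension by the stride.

```python
def upsampled_size(size, stride):
    # With padding='same', Conv2DTranspose scales each spatial dimension by the stride.
    return size * stride

height = width = 7                 # feature map produced by the Reshape layer
for _ in range(2):                 # two Conv2DTranspose layers with strides=(2, 2)
    height = upsampled_size(height, 2)
    width = upsampled_size(width, 2)

print((height, width))             # (28, 28), i.e. output shape (None, 28, 28, 1)
```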
Which learning rate is most appropriate to start training a GAN to avoid unstable training?
GANs are sensitive to learning rates; too high causes instability.
GAN training typically uses a small learning rate such as 0.0002 (the DCGAN default, usually paired with Adam and beta_1 = 0.5). A small rate keeps the generator and discriminator updates balanced and training stable.
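The instability itself is easy to see on a toy problem (plain gradient descent on f(x) = x^2, a deliberately simplified stand-in for a GAN loss surface): a small step size converges, while a large one overshoots the minimum and diverges.

```python
def run_gradient_descent(lr, steps=50, x0=1.0):
    # Minimize f(x) = x^2 (gradient 2x); each step multiplies x by (1 - 2*lr),
    # so |1 - 2*lr| > 1 means every step moves *away* from the minimum.
    x = x0
    for _ in range(steps):
        x = x - lr * 2 * x
    return abs(x)

small_lr = run_gradient_descent(0.1)   # converges toward the minimum at 0
large_lr = run_gradient_descent(1.5)   # overshoots and blows up
```

In a GAN the same effect is worse, because two networks are chasing a moving target set by each other.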
What does a higher Inception Score indicate when evaluating GAN-generated images?
Inception Score measures both quality and variety of generated images.
A higher Inception Score indicates that each image is confidently classified as some recognizable class (quality) and that, across the whole set, many different classes appear (diversity). Both are needed: confident but identical predictions score low.
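A minimal sketch of the score itself, assuming the per-image class distributions p(y|x) have already been obtained from a classifier: IS = exp(mean KL(p(y|x) || p(y))), where p(y) is the marginal over all generated images.

```python
import math

def kl_divergence(p, q):
    # KL(p || q) for discrete distributions given as lists of probabilities.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def inception_score(conditionals):
    # conditionals: one class distribution p(y|x) per generated image.
    n = len(conditionals)
    k = len(conditionals[0])
    marginal = [sum(p[c] for p in conditionals) / n for c in range(k)]
    return math.exp(sum(kl_divergence(p, marginal) for p in conditionals) / n)

# Confident AND varied predictions give a high score...
diverse = inception_score([[0.98, 0.01, 0.01],
                           [0.01, 0.98, 0.01],
                           [0.01, 0.01, 0.98]])
# ...while identical predictions (no variety) give the minimum score of 1.0.
collapsed = inception_score([[0.98, 0.01, 0.01]] * 3)
```

In practice the conditionals come from a pretrained Inception network, hence the name.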
During GAN training, the generator produces very similar images repeatedly, losing diversity. What is the most likely cause?
Mode collapse happens when the generator finds a small set of outputs that fool the discriminator easily.
This is mode collapse. If the discriminator is weak, the generator can cheat: it discovers a few outputs that reliably fool the discriminator and maps many different latent vectors to them, so it stops covering the other modes of the data distribution.
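A cheap diagnostic, sketched here on hand-made 2-D "samples": if the generator collapses, samples within a batch become nearly identical, so their average pairwise distance drops toward zero.

```python
import itertools
import math

def mean_pairwise_distance(batch):
    # Average Euclidean distance over all pairs of generated samples;
    # a value near zero is a warning sign of mode collapse.
    pairs = list(itertools.combinations(batch, 2))
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

healthy = mean_pairwise_distance([[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]])
collapsed = mean_pairwise_distance([[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]])
```

In a real training loop the same idea would be applied to flattened generated images (or their feature embeddings) rather than 2-D points.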