In a Generative Adversarial Network (GAN), what is the primary function of the generator model?
Think about which part of the GAN creates new images.
The generator's job is to transform random noise into new samples that resemble the real data closely enough to fool the discriminator into classifying them as real.
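As a rough sketch of this adversarial setup (hypothetical minimal models, invented here for illustration only), one generator update looks like training the generator on the label "real" so that it learns to fool the discriminator:

```python
import torch
import torch.nn as nn

# Hypothetical tiny models for illustration only
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))  # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))  # sample -> real/fake logit

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

z = torch.randn(4, 8)      # random noise input
fake = G(z)                # generator creates new samples
logits = D(fake)           # discriminator judges them
# The generator is scored as if its fakes were real (target 1.0):
# minimizing this loss pushes it to produce samples the discriminator accepts.
loss_g = bce(logits, torch.ones(4, 1))
loss_g.backward()
opt_g.step()
```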
Consider the following PyTorch code snippet for a generator outputting images. What is the shape of the generated images tensor?
import torch
import torch.nn as nn

class SimpleGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # Project the 100-dim noise vector to a 256-channel 8x8 feature map
        self.fc = nn.Linear(100, 256 * 8 * 8)
        self.conv = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, 2, 1),  # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(128, 3, 4, 2, 1),    # 16x16 -> 32x32
            nn.Tanh(),
        )

    def forward(self, z):
        x = self.fc(z)
        x = x.view(-1, 256, 8, 8)
        x = self.conv(x)
        return x

g = SimpleGenerator()
z = torch.randn(16, 100)
output = g(z)
output.shape
Check how the ConvTranspose2d layers change the spatial dimensions.
The first ConvTranspose2d (kernel 4, stride 2, padding 1) upsamples 8x8 to 16x16, and the second upsamples 16x16 to 32x32 with 3 output channels, so with a batch of 16 the output shape is torch.Size([16, 3, 32, 32]).
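The spatial sizes can be checked with PyTorch's ConvTranspose2d output-size formula (a quick sanity check, not part of the original snippet):

```python
def convtranspose2d_out(size, kernel, stride, padding, output_padding=0):
    # PyTorch formula: out = (in - 1) * stride - 2 * padding + kernel + output_padding
    return (size - 1) * stride - 2 * padding + kernel + output_padding

print(convtranspose2d_out(8, 4, 2, 1))   # 8x8   -> 16
print(convtranspose2d_out(16, 4, 2, 1))  # 16x16 -> 32
```

With kernel 4, stride 2, padding 1, each layer exactly doubles the spatial size, which is why this configuration is so common in GAN generators.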
Which statement best describes the effect of increasing the size of the latent vector (noise input) in a GAN generator?
Think about the trade-off between diversity and training complexity.
A larger latent space allows more variety in generated images but can make the model harder to train well.
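One concrete cost of a larger latent vector is the size of the generator's first projection layer. A back-of-the-envelope parameter count (reusing the 256*8*8 = 16384-unit projection from the snippet above; the latent sizes are illustrative) shows how quickly this grows:

```python
def linear_params(in_dim, out_dim):
    # weights + biases of a fully connected layer
    return in_dim * out_dim + out_dim

# First projection layer of a generator like the one above
print(linear_params(100, 16384))  # latent dim 100 -> ~1.65M parameters
print(linear_params(512, 16384))  # latent dim 512 -> ~8.4M parameters
```

More parameters give the generator more expressive room but also make optimization harder to stabilize, which is the trade-off the answer describes.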
What does a higher Inception Score (IS) indicate when evaluating generated images?
IS combines image quality and variety in its score.
A higher IS means each generated image is confidently assigned to a single class by a pretrained Inception classifier (quality) while the class distribution across all generated images is broad (diversity).
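The score itself is exp(E_x[KL(p(y|x) || p(y))]). A toy computation with made-up class probabilities (assumed for illustration, not real classifier outputs) shows why confident, varied predictions raise it:

```python
import math

def inception_score(cond_probs):
    # cond_probs: list of per-image class distributions p(y|x)
    n, k = len(cond_probs), len(cond_probs[0])
    marginal = [sum(p[j] for p in cond_probs) / n for j in range(k)]  # p(y)
    kl = [sum(p[j] * math.log(p[j] / marginal[j]) for j in range(k))
          for p in cond_probs]
    return math.exp(sum(kl) / n)

# Confident and diverse: each image is clearly one class, and the classes differ
print(inception_score([[0.9, 0.1], [0.1, 0.9]]))  # ~1.45
# Unconfident: the classifier is unsure about every image
print(inception_score([[0.5, 0.5], [0.5, 0.5]]))  # 1.0 (the minimum)
```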
During GAN training, the generator produces very similar images repeatedly, showing mode collapse. Which change is most likely to help reduce this problem?
Think about techniques that stabilize GAN training and encourage diversity.
Adding instance noise to the discriminator's inputs and smoothing the real labels (e.g. targeting 0.9 instead of 1.0) keep the discriminator from becoming overconfident, which preserves a useful gradient signal and encourages the generator to explore more diverse outputs.
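A quick way to see the effect of one-sided label smoothing is through binary cross-entropy: with a smoothed real target of 0.9, the loss actually penalizes the discriminator for being too confident (a toy calculation, not GAN training code):

```python
import math

def bce(p, y):
    # binary cross-entropy for one prediction p against target y
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

# With a hard target of 1.0, a near-certain prediction is rewarded...
print(bce(0.99, 1.0))  # ~0.01
# ...but with a smoothed target of 0.9, that same overconfidence costs extra loss,
print(bce(0.99, 0.9))  # ~0.47
# and the loss is minimized at the more moderate prediction of 0.9.
print(bce(0.90, 0.9))  # ~0.33
```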