Challenge - 5 Problems
GAN Mastery Badge
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
Intermediate · 2:00 remaining
Output of discriminator loss calculation in GAN training loop
Consider the following snippet from a GAN training loop where the discriminator loss is calculated. What does print(round(d_loss.item(), 3)) print after one backward pass?
PyTorch
import torch
import torch.nn as nn

criterion = nn.BCELoss()
d_real = torch.tensor([0.9], requires_grad=True)
d_fake = torch.tensor([0.1], requires_grad=True)
real_labels = torch.ones_like(d_real)
fake_labels = torch.zeros_like(d_fake)
loss_real = criterion(d_real, real_labels)
loss_fake = criterion(d_fake, fake_labels)
d_loss = loss_real + loss_fake
d_loss.backward()
print(round(d_loss.item(), 3))
Attempts: 2 left
💡 Hint
Recall that BCELoss is low when a prediction is close to its true label and high when it is far from it.
✗ Incorrect
The BCELoss for d_real = 0.9 with label 1 is -ln(0.9) ≈ 0.105, and for d_fake = 0.1 with label 0 is -ln(1 - 0.1) ≈ 0.105. Their sum is approximately 0.211.
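The arithmetic can be checked by hand with the BCE formula, BCE(p, y) = -(y·log p + (1-y)·log(1-p)), using only the standard library:

```python
import math

# Manual check of the quiz arithmetic (no torch needed):
# BCE(p, y) = -(y*log(p) + (1-y)*log(1-p))
loss_real = -math.log(0.9)      # prediction 0.9, label 1 -> ~0.105
loss_fake = -math.log(1 - 0.1)  # prediction 0.1, label 0 -> ~0.105
d_loss = loss_real + loss_fake
print(round(d_loss, 3))         # 0.211
```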
❓ Model Choice
Intermediate · 1:30 remaining
Choosing the correct optimizer update step in GAN training
In a GAN training loop, after computing the discriminator loss and calling d_loss.backward(), which of the following is the correct next step to update the discriminator's weights?
Attempts: 2 left
💡 Hint
Think about which optimizer corresponds to the discriminator.
✗ Incorrect
After backward(), calling optimizer_d.step() updates the discriminator's weights. zero_grad() clears accumulated gradients and should be called before backward().
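The full zero_grad → backward → step order can be seen in a minimal sketch; the toy linear "discriminator", SGD optimizer, and batch shapes below are illustrative assumptions, not the quiz's actual setup:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
disc = nn.Linear(2, 1)                        # stand-in discriminator
optimizer_d = torch.optim.SGD(disc.parameters(), lr=0.1)
criterion = nn.BCEWithLogitsLoss()

real_batch = torch.randn(8, 2)
before = disc.weight.detach().clone()

optimizer_d.zero_grad()                       # clear stale gradients first
d_loss = criterion(disc(real_batch), torch.ones(8, 1))
d_loss.backward()                             # fill .grad for disc parameters
optimizer_d.step()                            # apply the update to disc only
```

After step(), the discriminator's weights differ from `before`, confirming the update actually happened.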
❓ Hyperparameter
Advanced · 2:00 remaining
Effect of learning rate on GAN training stability
Which of the following learning rate choices is most likely to cause unstable GAN training with mode collapse?
Attempts: 2 left
💡 Hint
High learning rates can cause large weight updates and instability.
✗ Incorrect
A learning rate of 0.1 is far too high for GAN training and often causes unstable training and mode collapse; GANs typically use much smaller rates, on the order of 1e-4 to 2e-4.
🔧 Debug
Advanced · 2:00 remaining
Identifying the bug in GAN generator training step
In the generator training step below, which line causes the generator to not learn properly?
PyTorch
optimizer_g.zero_grad()
fake_data = generator(noise)
d_fake = discriminator(fake_data)
g_loss = criterion(d_fake, real_labels)
g_loss.backward()
optimizer_d.step() # <-- suspicious line
Attempts: 2 left
💡 Hint
Check which optimizer should update the generator weights.
✗ Incorrect
Calling optimizer_d.step() updates the discriminator's weights instead of the generator's, so the generator never learns. The fix is to call optimizer_g.step().
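A self-contained sketch of the corrected step is below; the toy generator/discriminator architectures and shapes are illustrative assumptions:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
generator = nn.Linear(4, 2)                                  # toy generator
discriminator = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid()) # toy discriminator
criterion = nn.BCELoss()
optimizer_g = torch.optim.SGD(generator.parameters(), lr=0.1)

noise = torch.randn(8, 4)
real_labels = torch.ones(8, 1)
g_before = generator.weight.detach().clone()

optimizer_g.zero_grad()
fake_data = generator(noise)
d_fake = discriminator(fake_data)
g_loss = criterion(d_fake, real_labels)  # generator wants d_fake near 1
g_loss.backward()
optimizer_g.step()                       # fix: update the GENERATOR's weights
```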
🧠 Conceptual
Expert · 2:30 remaining
Why alternate training of discriminator and generator is important in GANs
Why do GAN training loops alternate between updating the discriminator and the generator instead of updating both simultaneously?
Attempts: 2 left
💡 Hint
Think about the adversarial nature of GANs and training stability.
✗ Incorrect
Alternating updates keep the discriminator and generator balanced, preventing either network from dominating and destabilizing training.