PyTorch · ~20 mins

GAN training loop in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
Predict Output
intermediate
Output of discriminator loss calculation in GAN training loop
Consider the following snippet from a GAN training loop where the discriminator loss is calculated. What is the output of print(round(d_loss.item(), 3)) after one backward pass?
PyTorch
import torch
import torch.nn as nn

criterion = nn.BCELoss()
d_real = torch.tensor([0.9], requires_grad=True)
d_fake = torch.tensor([0.1], requires_grad=True)
real_labels = torch.ones_like(d_real)
fake_labels = torch.zeros_like(d_fake)

loss_real = criterion(d_real, real_labels)
loss_fake = criterion(d_fake, fake_labels)
d_loss = loss_real + loss_fake
d_loss.backward()
print(round(d_loss.item(), 3))
A. 0.211
B. 0.000
C. 0.693
D. 0.105
💡 Hint
Recall that BCELoss is low when a prediction is close to its true label and high when it is far away: BCE(p, y) = -[y·ln(p) + (1 - y)·ln(1 - p)].
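The loss above can be checked by hand with plain math, since each term of BCELoss reduces to a single log (a sketch mirroring the quiz values):

```python
import math

# BCELoss(p, y) = -[y * ln(p) + (1 - y) * ln(1 - p)]
loss_real = -math.log(0.9)      # d_real = 0.9 against real label 1
loss_fake = -math.log(1 - 0.1)  # d_fake = 0.1 against fake label 0
d_loss = loss_real + loss_fake
print(round(d_loss, 3))         # 0.211
```

Both terms collapse to -ln(0.9) ≈ 0.105, so the total is about 0.211.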
Model Choice
intermediate
Choosing the correct optimizer update step in GAN training
In a GAN training loop, after computing the discriminator loss and calling d_loss.backward(), which of the following is the correct next step to update the discriminator's weights?
A. loss.backward()
B. optimizer_d.zero_grad()
C. optimizer_d.step()
D. optimizer_g.step()
💡 Hint
Think about which optimizer corresponds to the discriminator.
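For reference, the standard zero_grad → backward → step order looks like this (the one-layer "discriminator" and the SGD settings are illustrative placeholders, not the quiz's actual networks):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
discriminator = nn.Linear(2, 1)                  # placeholder discriminator
optimizer_d = torch.optim.SGD(discriminator.parameters(), lr=0.1)
criterion = nn.BCEWithLogitsLoss()

real_batch = torch.randn(4, 2)
real_labels = torch.ones(4, 1)
w0 = discriminator.weight.detach().clone()       # snapshot for comparison

optimizer_d.zero_grad()                          # 1. clear stale gradients
d_loss = criterion(discriminator(real_batch), real_labels)
d_loss.backward()                                # 2. compute gradients
optimizer_d.step()                               # 3. apply the weight update
```

After `optimizer_d.step()`, the discriminator's weights differ from `w0`; without it, gradients are computed but never applied.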
Hyperparameter
advanced
Effect of learning rate on GAN training stability
Which of the following learning rate choices is most likely to cause unstable GAN training with mode collapse?
A. Learning rate = 0.0005
B. Learning rate = 0.0002
C. Learning rate = 0.00005
D. Learning rate = 0.1
💡 Hint
High learning rates can cause large weight updates and instability.
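For context, a commonly used stable configuration (popularized by DCGAN) pairs a small learning rate with Adam and beta1 = 0.5. The one-layer networks below are placeholders; only the optimizer settings matter here:

```python
import torch
import torch.nn as nn

generator = nn.Linear(8, 2)        # placeholder networks
discriminator = nn.Linear(2, 1)

# DCGAN-style optimizer settings: lr = 2e-4, betas = (0.5, 0.999)
optimizer_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
optimizer_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
```

A rate of 0.1 is roughly 500x larger than this, producing weight swings large enough to destabilize the adversarial balance.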
🔧 Debug
advanced
Identifying the bug in GAN generator training step
In the generator training step below, which line prevents the generator from learning?
PyTorch
optimizer_g.zero_grad()
fake_data = generator(noise)
d_fake = discriminator(fake_data)
g_loss = criterion(d_fake, real_labels)
g_loss.backward()
optimizer_d.step()  # <-- suspicious line
A. g_loss.backward()
B. optimizer_d.step()
C. optimizer_g.zero_grad()
D. criterion(d_fake, real_labels)
💡 Hint
Check which optimizer should update the generator weights.
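A corrected version of the step, sketched with placeholder one-layer networks (the quiz's actual models are not shown, so these definitions are assumptions):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
generator = nn.Linear(8, 2)                               # placeholder generator
discriminator = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid())
optimizer_g = torch.optim.SGD(generator.parameters(), lr=0.1)
criterion = nn.BCELoss()

noise = torch.randn(4, 8)
real_labels = torch.ones(4, 1)
w0 = generator.weight.detach().clone()

optimizer_g.zero_grad()
fake_data = generator(noise)
d_fake = discriminator(fake_data)
g_loss = criterion(d_fake, real_labels)   # generator wants fakes labeled real
g_loss.backward()
optimizer_g.step()   # fixed: step the GENERATOR's optimizer, not optimizer_d
```

In the buggy version, `optimizer_d.step()` applies the generator's gradients to nothing it owns: the generator's parameters never move, and the discriminator gets updated with stale or unintended gradients instead.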
🧠 Conceptual
expert
Why alternate training of discriminator and generator is important in GANs
Why do GAN training loops alternate between updating the discriminator and the generator instead of updating both simultaneously?
A. To prevent one network from overpowering the other, maintaining a balance for stable training
B. Because simultaneous updates cause syntax errors in PyTorch
C. To reduce the total training time by skipping some updates
D. Because the generator requires more frequent updates than the discriminator
💡 Hint
Think about the adversarial nature of GANs and training stability.
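The alternation can be sketched as two separate phases per batch, each touching only one optimizer (placeholder networks and random data; `detach()` keeps the discriminator's phase from updating the generator):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
z_dim = 8
generator = nn.Linear(z_dim, 2)                              # placeholders
discriminator = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid())
optimizer_g = torch.optim.SGD(generator.parameters(), lr=0.05)
optimizer_d = torch.optim.SGD(discriminator.parameters(), lr=0.05)
criterion = nn.BCELoss()

real_batch = torch.randn(4, 2)
ones, zeros = torch.ones(4, 1), torch.zeros(4, 1)

# Phase 1 -- discriminator step: fakes are detached, so no gradient
# reaches the generator and only optimizer_d's parameters move.
optimizer_d.zero_grad()
fake = generator(torch.randn(4, z_dim)).detach()
d_loss = (criterion(discriminator(real_batch), ones)
          + criterion(discriminator(fake), zeros))
d_loss.backward()
optimizer_d.step()

# Phase 2 -- generator step: gradients flow THROUGH the discriminator,
# but only optimizer_g.step() is called, so only the generator updates.
optimizer_g.zero_grad()
g_loss = criterion(discriminator(generator(torch.randn(4, z_dim))), ones)
g_loss.backward()
optimizer_g.step()
```

Keeping the phases separate lets each network train against a fixed opponent for that step, which is what preserves the balance answer A describes.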