Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)
Complete the code to define a simple generator class in PyTorch.
PyTorch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        self.main = nn.Sequential(
            nn.Linear(100, 256),
            nn.ReLU(True),
            nn.Linear(256, 784),
            nn.Tanh()
        )

    def forward(self, input):
        return self.main([1])
Common Mistakes
Using 'self' instead of 'input' inside the forward method.
Using a variable not defined in the method like 'x' or 'output'.
Explanation: The forward method takes the input tensor and passes it through the model layers, so 'input' should be used.
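For reference, a completed version of the generator with the blank filled in as the explanation describes (the layer sizes come from the snippet above; the noise batch at the end is only an illustrative usage):

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self):
        super(Generator, self).__init__()
        # Map a 100-dim noise vector to a 784-dim (28x28) output in [-1, 1]
        self.main = nn.Sequential(
            nn.Linear(100, 256),
            nn.ReLU(True),
            nn.Linear(256, 784),
            nn.Tanh()
        )

    def forward(self, input):
        # Blank [1]: pass the input tensor through the layers
        return self.main(input)

noise = torch.randn(4, 100)   # a batch of 4 noise vectors
fake = Generator()(noise)
print(fake.shape)             # torch.Size([4, 784])
```

Because the last layer is Tanh, every output value lies in [-1, 1], which matches images normalized to that range.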
2. Fill in the blank (medium)
Complete the code to define a simple discriminator class in PyTorch.
PyTorch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()
        self.main = nn.Sequential(
            nn.Linear(784, 256),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(256, 1),
            nn.Sigmoid()
        )

    def forward(self, input):
        return self.main([1])
Common Mistakes
Using 'self' or undefined variables inside the forward method.
Mixing up the name of the input variable.
Explanation: The forward method receives the input tensor and passes it through the model layers, so 'input' should be used.
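For reference, a completed version of the discriminator with the blank filled in with 'input', as above:

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self):
        super(Discriminator, self).__init__()
        # Map a flattened 784-dim image to a single real/fake probability
        self.main = nn.Sequential(
            nn.Linear(784, 256),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(256, 1),
            nn.Sigmoid()
        )

    def forward(self, input):
        # Blank [1]: pass the input tensor through the layers
        return self.main(input)
```

The final Sigmoid squashes the score into (0, 1), so the output can be read as the probability that the input is real.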
3. Fill in the blank (hard)
Fix the error in the training loop so the generator receives the correct input before its output is passed to the discriminator.
PyTorch
for epoch in range(num_epochs):
    for real_data in dataloader:
        noise = torch.randn(batch_size, 100)
        fake_data = generator([1])
        output = discriminator(fake_data)
        # rest of training code...
Common Mistakes
Passing real_data instead of noise to the generator.
Passing the generator object itself.
Explanation: The generator expects noise as input to produce fake data, so 'noise' is the correct answer.
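A runnable sketch of the corrected loop. The small Sequential models and the one-batch dataloader are stand-ins assumed for illustration; only the two marked lines correspond to the exercise:

```python
import torch
import torch.nn as nn

# Stand-in models and data, assumed for illustration only
generator = nn.Sequential(nn.Linear(100, 784), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(784, 1), nn.Sigmoid())
batch_size, num_epochs = 4, 1
dataloader = [torch.randn(batch_size, 784)]  # one dummy batch of "real" data

for epoch in range(num_epochs):
    for real_data in dataloader:
        noise = torch.randn(batch_size, 100)
        fake_data = generator(noise)       # blank [1]: the generator takes noise
        output = discriminator(fake_data)  # the fakes are then scored
        # rest of training code...
```

Passing real_data (or the generator object itself) here would defeat the purpose: the generator's job is to turn random noise into plausible samples.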
4. Fill in the blanks (hard)
Fill both blanks to complete the code for calculating the discriminator loss using binary cross-entropy.
PyTorch
criterion = nn.BCELoss()
real_labels = torch.ones(batch_size, 1)
fake_labels = torch.zeros(batch_size, 1)

output_real = discriminator(real_data)
loss_real = criterion(output_real, [1])
output_fake = discriminator(fake_data.detach())
loss_fake = criterion(output_fake, [2])
d_loss = loss_real + loss_fake
Common Mistakes
Swapping real_labels and fake_labels.
Using output tensors as labels.
Explanation: For real data the target labels are ones (real_labels); for fake data the target labels are zeros (fake_labels).
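A self-contained sketch with both blanks filled in. The stand-in discriminator and the random real_data/fake_data tensors are assumptions for illustration; in the real loop fake_data would come from the generator:

```python
import torch
import torch.nn as nn

discriminator = nn.Sequential(nn.Linear(784, 1), nn.Sigmoid())  # stand-in
batch_size = 4
real_data = torch.randn(batch_size, 784)   # stand-in real batch
fake_data = torch.randn(batch_size, 784)   # would come from the generator

criterion = nn.BCELoss()
real_labels = torch.ones(batch_size, 1)    # targets for real samples
fake_labels = torch.zeros(batch_size, 1)   # targets for fake samples

output_real = discriminator(real_data)
loss_real = criterion(output_real, real_labels)  # blank [1]: real_labels
output_fake = discriminator(fake_data.detach())  # detach: no generator update here
loss_fake = criterion(output_fake, fake_labels)  # blank [2]: fake_labels
d_loss = loss_real + loss_fake
```

Note the detach() on fake_data: it blocks gradients from flowing back into the generator during the discriminator's update.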
5. Fill in the blanks (hard)
Fill all three blanks to complete the generator training step with loss calculation and backpropagation.
PyTorch
optimizer_g.zero_grad()
noise = torch.randn(batch_size, 100)
fake_data = generator([1])
output = discriminator(fake_data)
loss_g = criterion(output, [2])
loss_g.[3]()
optimizer_g.step()
Common Mistakes
Using fake_labels instead of real_labels for generator loss.
Forgetting to call backward() before optimizer step.
Passing wrong input to generator.
Explanation: The generator's input is noise. The generator tries to fool the discriminator, so the target labels are real_labels (ones). Then call backward() to compute gradients before optimizer_g.step().
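A complete generator training step with all three blanks filled in. The stand-in models, optimizer, and sizes are assumptions for illustration:

```python
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(100, 784), nn.Tanh())       # stand-in
discriminator = nn.Sequential(nn.Linear(784, 1), nn.Sigmoid())  # stand-in
criterion = nn.BCELoss()
optimizer_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
batch_size = 4
real_labels = torch.ones(batch_size, 1)  # generator wants fakes scored as real

optimizer_g.zero_grad()
noise = torch.randn(batch_size, 100)
fake_data = generator(noise)             # blank [1]: noise
output = discriminator(fake_data)
loss_g = criterion(output, real_labels)  # blank [2]: real_labels
loss_g.backward()                        # blank [3]: backward
optimizer_g.step()
```

Using real_labels here is the key trick: the generator's loss is low exactly when the discriminator mistakes its fakes for real data.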