Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)
Complete the code to define the encoder layer in a simple autoencoder.
PyTorch

    import torch.nn as nn

    class Autoencoder(nn.Module):
        def __init__(self):
            super(Autoencoder, self).__init__()
            self.encoder = nn.Sequential(
                nn.Linear(784, 128),
                nn.ReLU(),
                nn.Linear(128, [1]),
                nn.ReLU()
            )
Common Mistakes
Using a bottleneck size larger than the input size defeats compression.
Choosing a bottleneck size too large reduces the model's ability to learn compact features.
Answer: The encoder compresses the input from 784 features down to 64 before decoding; 64 is a common bottleneck size for simple autoencoders.
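A filled-in sketch of the encoder with the 64-unit bottleneck from the answer; the random input batch is only for illustration:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()
        # Bottleneck of 64 units: 784 -> 128 -> 64
        self.encoder = nn.Sequential(
            nn.Linear(784, 128),
            nn.ReLU(),
            nn.Linear(128, 64),   # blank [1] = 64
            nn.ReLU()
        )

model = Autoencoder()
x = torch.randn(8, 784)    # a batch of 8 flattened 28x28 images
z = model.encoder(x)       # encoded batch of shape (8, 64)
```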
Task 2: Fill in the blank (medium)
Complete the code to define the decoder layer in the autoencoder.
PyTorch

    class Autoencoder(nn.Module):
        def __init__(self):
            super(Autoencoder, self).__init__()
            self.decoder = nn.Sequential(
                nn.Linear(64, 128),
                nn.ReLU(),
                nn.Linear(128, [1]),
                nn.Sigmoid()
            )
Common Mistakes
Setting the output size smaller than the input causes incomplete reconstruction.
Using a non-matching output size causes shape errors during training.
Answer: The decoder reconstructs the original input size of 784 from the 64-unit bottleneck.
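A filled-in sketch of the decoder, mirroring the encoder back up to the 784-feature input size; the random batch of codes is only for illustration:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()
        # Mirror of the encoder: 64 -> 128 -> 784
        self.decoder = nn.Sequential(
            nn.Linear(64, 128),
            nn.ReLU(),
            nn.Linear(128, 784),   # blank [1] = 784, the original input size
            nn.Sigmoid()           # squashes outputs into [0, 1], matching pixel range
        )

model = Autoencoder()
z = torch.randn(8, 64)          # a batch of bottleneck codes
recon = model.decoder(z)        # reconstructions of shape (8, 784)
```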
Task 3: Fill in the blank (hard)
Fix the error in the forward method of the autoencoder.
PyTorch

    def forward(self, x):
        encoded = self.encoder(x)
        decoded = self.decoder([1])
        return decoded
Common Mistakes
Passing the original input to the decoder instead of the encoded representation.
Passing the decoder or encoder functions themselves instead of data.
Answer: The decoder should take the encoded output as its input to reconstruct the original data.
Task 4: Fill in the blank (hard)
Fill both blanks to complete the training loop for the autoencoder.
PyTorch

    for data in dataloader:
        inputs, _ = data
        optimizer.zero_grad()
        outputs = model([1])
        loss = criterion(outputs, [2])
        loss.backward()
        optimizer.step()
Common Mistakes
Using outputs as input to the model causes infinite loops.
Not converting inputs to float causes type errors in loss calculation.
Answer: The model's input is the raw inputs, and the loss compares the outputs against the original inputs converted to float, so both tensors share a floating-point dtype.
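A runnable sketch of the completed loop; the small stand-in model, the MSE criterion, the Adam optimizer, and the synthetic dataloader are assumptions for illustration, not part of the original task:

```python
import torch
import torch.nn as nn

# Stand-ins for the real autoencoder and dataloader used on this page
model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(),
                      nn.Linear(64, 784), nn.Sigmoid())
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

dataset = torch.utils.data.TensorDataset(torch.rand(32, 784),
                                         torch.zeros(32))  # (inputs, dummy labels)
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)

for data in dataloader:
    inputs, _ = data                           # labels are unused in an autoencoder
    optimizer.zero_grad()
    outputs = model(inputs)                    # blank [1] = inputs
    loss = criterion(outputs, inputs.float())  # blank [2] = inputs.float()
    loss.backward()
    optimizer.step()
```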
Task 5: Fill in the blank (hard)
Fill all three blanks to create a dictionary that stores the encoded representations for each input batch.
PyTorch

    encoded_data = {}
    for i, data in enumerate(dataloader):
        inputs, _ = data
        encoded = model.encoder([1])
        encoded_data[[2]] = [3]
Common Mistakes
Not detaching encoded tensors causes memory leaks.
Using inputs as dictionary keys causes errors.
Not converting inputs to float causes type errors.
Answer: Inputs are converted to float before encoding, the batch index i is used as the dictionary key, and the encoded tensor is detached from the computation graph before storing.
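A runnable sketch of the filled-in loop; the encoder and the synthetic dataloader are assumptions standing in for the real model and data:

```python
import torch
import torch.nn as nn

# Stand-in encoder and data for illustration
class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()
        self.encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU())

model = Autoencoder()
dataset = torch.utils.data.TensorDataset(torch.rand(16, 784), torch.zeros(16))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)

encoded_data = {}
for i, data in enumerate(dataloader):
    inputs, _ = data
    encoded = model.encoder(inputs.float())  # blank [1] = inputs.float()
    encoded_data[i] = encoded.detach()       # blank [2] = i, blank [3] = encoded.detach()
```

Detaching before storing drops the autograd history, so the saved tensors do not keep the whole computation graph alive in memory.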