PyTorch · ~10 mins

Autoencoder architecture in PyTorch - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)

Complete the code to define the encoder layer in a simple autoencoder.

PyTorch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()
        self.encoder = nn.Sequential(
            nn.Linear(784, 128),
            nn.ReLU(),
            nn.Linear(128, [1]),
            nn.ReLU()
        )
A. 64
B. 256
C. 512
D. 1024
Common Mistakes
Using a bottleneck size larger than the input size defeats compression.
Choosing a bottleneck size too large reduces the model's ability to learn compact features.
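As a reference sketch for this task, here is an encoder with the same layer sizes, using a 64-unit bottleneck; the batch size of 32 and the interpretation of 784 as flattened 28×28 images are illustrative assumptions, not part of the task:

```python
import torch
import torch.nn as nn

# Illustrative encoder: 784 -> 128 -> 64. The bottleneck must be
# smaller than the input dimension for the network to compress anything.
encoder = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 64),
    nn.ReLU(),
)

x = torch.randn(32, 784)  # a batch of 32 flattened 28x28 images
z = encoder(x)
print(z.shape)  # torch.Size([32, 64])
```

Each sample is squeezed from 784 values down to 64, which is what forces the network to learn a compact representation.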
Task 2: Fill in the blank (medium)

Complete the code to define the decoder layer in the autoencoder.

PyTorch
class Autoencoder(nn.Module):
    def __init__(self):
        super(Autoencoder, self).__init__()
        self.decoder = nn.Sequential(
            nn.Linear(64, 128),
            nn.ReLU(),
            nn.Linear(128, [1]),
            nn.Sigmoid()
        )
A. 512
B. 64
C. 784
D. 256
Common Mistakes
Setting the output size smaller than the input causes incomplete reconstruction.
Using a non-matching output size causes shape errors during training.
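A decoder mirroring the encoder above would map the bottleneck back to the input dimension; this sketch assumes a 64-unit code and normalized pixel inputs (which is why the final Sigmoid is appropriate):

```python
import torch
import torch.nn as nn

# Illustrative decoder: 64 -> 128 -> 784, the mirror of the encoder.
# Sigmoid keeps every output in [0, 1], matching normalized pixel values.
decoder = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 784),
    nn.Sigmoid(),
)

z = torch.randn(32, 64)  # a batch of bottleneck codes
recon = decoder(z)
print(recon.shape)  # torch.Size([32, 784])
```

The output dimension must equal the input dimension (784 here) so the reconstruction can be compared element-wise with the original.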
Task 3: Fill in the blank (hard)

Fix the error in the forward method of the autoencoder.

PyTorch
def forward(self, x):
    encoded = self.encoder(x)
    decoded = self.decoder([1])
    return decoded
A. self.decoder
B. x
C. self.encoder
D. encoded
Common Mistakes
Passing the original input to the decoder instead of the encoded representation.
Passing the decoder or encoder functions themselves instead of data.
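For reference, a complete forward pass chains encoder and decoder, with the decoder consuming the encoder's output; the layer sizes below are the illustrative 784/128/64 shapes assumed in the earlier tasks:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(),
                                     nn.Linear(128, 64), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(),
                                     nn.Linear(128, 784), nn.Sigmoid())

    def forward(self, x):
        encoded = self.encoder(x)        # compress to the bottleneck
        decoded = self.decoder(encoded)  # reconstruct from the code
        return decoded

model = Autoencoder()
out = model(torch.randn(8, 784))
print(out.shape)  # torch.Size([8, 784])
```

The key is the data flow: the decoder never sees the raw input, only the encoded representation.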
Task 4: Fill in the blank (hard)

Fill both blanks to complete the training loop for the autoencoder.

PyTorch
for data in dataloader:
    inputs, _ = data
    optimizer.zero_grad()
    outputs = model([1])
    loss = criterion(outputs, [2])
    loss.backward()
    optimizer.step()
A. inputs
B. outputs
C. inputs.detach()
D. inputs.float()
Common Mistakes
Feeding outputs back into the model is circular; an autoencoder's reconstruction target is the original inputs.
Not converting inputs to float causes type errors in loss calculation.
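A runnable sketch of this loop is shown below; the single fake batch, MSE criterion, and Adam hyperparameters are illustrative assumptions standing in for a real dataloader:

```python
import torch
import torch.nn as nn

# Minimal training-loop sketch on synthetic data. The model and
# hyperparameters are illustrative, not the only valid choices.
model = nn.Sequential(nn.Linear(784, 64), nn.ReLU(),
                      nn.Linear(64, 784), nn.Sigmoid())
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

dataloader = [(torch.rand(16, 784), torch.zeros(16))]  # one fake batch

for data in dataloader:
    inputs, _ = data                   # labels are ignored
    optimizer.zero_grad()
    outputs = model(inputs)
    loss = criterion(outputs, inputs)  # reconstruct the inputs themselves
    loss.backward()
    optimizer.step()
print(loss.item() >= 0)  # True
```

Because training is unsupervised, the inputs serve as both model input and loss target; the labels yielded by the dataloader are discarded.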
Task 5: Fill in the blank (hard)

Fill all three blanks to create a dictionary that stores the encoded representations for each input batch.

PyTorch
encoded_data = {}
for i, data in enumerate(dataloader):
    inputs, _ = data
    encoded = model.encoder([1])
    encoded_data[[2]] = [3]
A. inputs
B. i
C. encoded.detach()
D. inputs.float()
Common Mistakes
Not detaching encoded tensors keeps their autograd graphs alive, so memory use grows with every stored batch.
Using input tensors as dictionary keys is unreliable; the batch index is the natural key.
Not converting inputs to float causes type errors.
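Putting the pieces together, this sketch collects detached bottleneck codes keyed by batch index; the three synthetic batches and the single-layer encoder are illustrative assumptions:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(64, 784), nn.Sigmoid())

model = Autoencoder()
dataloader = [(torch.rand(4, 784), torch.zeros(4)) for _ in range(3)]

encoded_data = {}
for i, data in enumerate(dataloader):
    inputs, _ = data
    encoded = model.encoder(inputs)
    # detach() drops the autograd graph so stored codes hold no
    # references to intermediate activations
    encoded_data[i] = encoded.detach()

print(len(encoded_data))              # 3
print(encoded_data[0].requires_grad)  # False
```

Keying by the batch index `i` keeps lookups simple, and storing detached tensors means the dictionary can grow without retaining the computation graph of every forward pass.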