PyTorch · ~20 mins

Autoencoder architecture in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
Model Choice (intermediate)
Choose the correct autoencoder architecture
Which of the following PyTorch model architectures correctly implements a simple autoencoder with one hidden layer in the encoder and decoder?
A
class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(784, 128),
            nn.ReLU()
        )
        self.decoder = nn.Sequential(
            nn.Linear(128, 784),
            nn.Sigmoid()
        )
    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x
B
class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(784, 128),
            nn.Sigmoid()
        )
        self.decoder = nn.Sequential(
            nn.Linear(128, 784),
            nn.ReLU()
        )
    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x
C
class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(784, 128),
            nn.Tanh()
        )
        self.decoder = nn.Sequential(
            nn.Linear(128, 784),
            nn.Tanh()
        )
    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x
D
class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(784, 128),
            nn.ReLU()
        )
        self.decoder = nn.Sequential(
            nn.Linear(128, 784),
            nn.ReLU()
        )
    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x
💡 Hint
Remember: the decoder's output activation should be sigmoid when the input images are normalized to [0, 1].
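A minimal sketch (assuming torch is installed and the input is flattened to 784 features, as in MNIST) of why option A's Sigmoid output matters: the decoder's outputs are guaranteed to lie in (0, 1), matching normalized pixel values.

```python
import torch
import torch.nn as nn

# Option A: ReLU inside the encoder, Sigmoid on the decoder output.
class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(128, 784), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
x = torch.rand(16, 784)  # normalized "pixels" in [0, 1]
recon = model(x)
# Sigmoid bounds every reconstructed value to (0, 1).
print(recon.min().item() >= 0.0 and recon.max().item() <= 1.0)  # True
```

With a ReLU output instead, reconstructions would be unbounded above and clipped to exactly zero below, which mismatches the target range.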
Predict Output (intermediate)
Output shape of encoded representation
Given the following autoencoder model, what is the shape of the encoded output for an input batch of shape (64, 784)?
PyTorch
class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(784, 256),
            nn.ReLU(),
            nn.Linear(256, 64),
            nn.ReLU()
        )
        self.decoder = nn.Sequential(
            nn.Linear(64, 256),
            nn.ReLU(),
            nn.Linear(256, 784),
            nn.Sigmoid()
        )
    def forward(self, x):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        return decoded

model = Autoencoder()
input_batch = torch.randn(64, 784)
encoded_output = model.encoder(input_batch)
encoded_output.shape
A) (256, 64)
B) (64, 256)
C) (64, 64)
D) (784, 64)
💡 Hint
Look at the last Linear layer in the encoder.
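The shape can be worked out mechanically: each nn.Linear(in, out) maps (batch, in) to (batch, out), so the encoder takes (64, 784) → (64, 256) → (64, 64). A quick check, assuming torch is available:

```python
import torch
import torch.nn as nn

# Same encoder as in the problem; ReLU does not change shapes.
encoder = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
)
encoded = encoder(torch.randn(64, 784))
print(encoded.shape)  # torch.Size([64, 64])
```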
Hyperparameter (advanced)
Choosing the latent space size
In an autoencoder, what is the main effect of increasing the size of the latent space (the encoded representation dimension)?
A) It always improves compression and reduces reconstruction error.
B) It increases the model's capacity to reconstruct the input but may reduce compression and generalization.
C) It decreases the model's capacity and leads to underfitting.
D) It has no effect on model performance or compression.
💡 Hint
Think about the trade-off between compression and reconstruction quality.
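One concrete face of the trade-off: parameter count (and thus reconstruction capacity) grows linearly with the latent dimension, while the compression factor of the bottleneck shrinks. A sketch, assuming a single-hidden-layer autoencoder on 784-dimensional inputs:

```python
import torch.nn as nn

def single_layer_autoencoder(latent_dim, input_dim=784):
    # One Linear encoder and one Linear decoder around the bottleneck.
    return nn.Sequential(
        nn.Linear(input_dim, latent_dim), nn.ReLU(),
        nn.Linear(latent_dim, input_dim), nn.Sigmoid(),
    )

for latent in (16, 64, 256):
    params = sum(p.numel() for p in single_layer_autoencoder(latent).parameters())
    ratio = 784 / latent  # compression factor of the bottleneck
    print(f"latent={latent:>3}  params={params:>7}  compression={ratio:.1f}x")
```

A 256-dimensional latent has roughly 16x the parameters of a 16-dimensional one but only compresses the input about 3x, which is why a larger bottleneck can reconstruct better yet compress and generalize worse.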
Metrics (advanced)
Choosing the loss function for image autoencoder
Which loss function is most appropriate for training an autoencoder on normalized grayscale images with pixel values between 0 and 1?
A) Cosine similarity loss
B) Cross-entropy loss
C) Hinge loss
D) Mean Squared Error (MSE) loss
💡 Hint
Consider the type of output and target values.
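A tiny hand-computed example of MSE on normalized pixels: the loss averages the squared elementwise differences between the reconstruction and the target, so it directly penalizes per-pixel error on continuous values in [0, 1].

```python
import torch
import torch.nn as nn

target = torch.tensor([[0.0, 0.5, 1.0]])  # normalized pixel values
recon  = torch.tensor([[0.1, 0.5, 0.8]])  # decoder output (e.g. after Sigmoid)

mse = nn.MSELoss()(recon, target)
# mean of (0.1**2 + 0.0**2 + 0.2**2) = 0.05 / 3 ≈ 0.0167
print(round(mse.item(), 4))  # 0.0167
```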
🔧 Debug (expert)
Identify the cause of training loss not decreasing
A user trains an autoencoder on MNIST but notices the training loss does not decrease at all. The model code is below. What is the most likely cause?
PyTorch
class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(784, 128),
            nn.ReLU()
        )
        self.decoder = nn.Sequential(
            nn.Linear(128, 784),
            nn.ReLU()
        )
    def forward(self, x):
        x = self.encoder(x)
        x = self.decoder(x)
        return x

model = Autoencoder()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for data, _ in dataloader:
    data = data.view(data.size(0), -1)
    optimizer.zero_grad()
    output = model(data)
    loss = criterion(output, data)
    loss.backward()
    optimizer.step()
A) The decoder ends with a ReLU activation, so outputs are unbounded above and clipped to zero below, while the input pixels lie in [0, 1], leading to poor loss optimization.
B) The optimizer's learning rate is too high, causing divergence.
C) The input data is not flattened before being fed to the model.
D) The loss function is inappropriate for reconstruction tasks.
💡 Hint
Check the decoder output activation and input data range.
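A sketch of the fix implied by answer A, under the same setup as the problem (random data standing in for a flattened MNIST batch): ending the decoder with Sigmoid instead of ReLU lets reconstructions match the [0, 1] target range, and the loss starts decreasing.

```python
import torch
import torch.nn as nn

# Same model as the problem, but the decoder ends with Sigmoid, not ReLU.
class FixedAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(128, 784), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

torch.manual_seed(0)
model = FixedAutoencoder()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

data = torch.rand(32, 784)  # stand-in for a flattened, normalized batch
losses = []
for _ in range(50):
    optimizer.zero_grad()
    loss = criterion(model(data), data)
    loss.backward()
    optimizer.step()
    losses.append(loss.item())

print(losses[-1] < losses[0])  # True: loss now decreases
```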