Challenge - 5 Problems
VAE Mastery Badge
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
intermediate · 2:00 remaining
Output of VAE latent variable sampling code
What is the shape of the sampled latent variable z after running this PyTorch code snippet?
PyTorch
import torch

batch_size = 16
latent_dim = 10

mu = torch.zeros(batch_size, latent_dim)
logvar = torch.zeros(batch_size, latent_dim)
std = torch.exp(0.5 * logvar)
eps = torch.randn_like(std)
z = mu + eps * std
print(z.shape)
Attempts:
2 left
💡 Hint
Remember that mu and logvar have shape (batch_size, latent_dim), and sampling keeps that shape.
✗ Incorrect
The sampled latent variable z has the same shape as mu and logvar, namely (batch_size, latent_dim). With batch_size=16 and latent_dim=10, the snippet prints torch.Size([16, 10]).
❓ Model Choice
intermediate · 2:00 remaining
Choosing the correct VAE loss function components
Which of the following correctly describes the two main components of the Variational Autoencoder loss function?
Attempts:
2 left
💡 Hint
Think about what VAE tries to reconstruct and what distribution it tries to regularize.
✗ Incorrect
The VAE loss has two parts: a reconstruction loss that measures how well the decoder recreates the input, and a KL divergence term that regularizes the latent distribution to be close to a prior (usually standard normal).
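A minimal sketch of this two-part loss, assuming a Bernoulli decoder (binary cross-entropy reconstruction) and the closed-form KL divergence against a standard normal prior; the function name vae_loss and the toy tensors are illustrative, not part of the quiz:

```python
import torch
import torch.nn.functional as F

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction term: how well the decoder recreates the input
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    # KL divergence between N(mu, sigma^2) and the standard normal prior,
    # in closed form: -0.5 * sum(1 + logvar - mu^2 - exp(logvar))
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Toy check: when mu = 0 and logvar = 0, the posterior equals the prior,
# so the KL term is exactly zero and the loss is pure reconstruction.
x = torch.rand(16, 784)
mu = torch.zeros(16, 10)
logvar = torch.zeros(16, 10)
loss = vae_loss(x, x, mu, logvar)
```

The two terms pull in opposite directions: the reconstruction term rewards informative latents, while the KL term pulls the posterior toward the prior.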
❓ Hyperparameter
advanced · 2:00 remaining
Effect of latent dimension size in VAE
What is the most likely effect of increasing the latent dimension size in a Variational Autoencoder model?
Attempts:
2 left
💡 Hint
Think about how latent space size affects model capacity and regularization.
✗ Incorrect
Increasing the latent dimension gives the model capacity to represent more complex features, but it also makes overfitting more likely: the KL divergence must regularize more dimensions, its constraining effect per dimension weakens, and the model's ability to generalize can suffer.
🔧 Debug
advanced · 2:00 remaining
Identifying error in VAE reparameterization code
What error will this PyTorch code raise when running the reparameterization step in a VAE?
PyTorch
def reparameterize(mu, logvar):
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + eps * std

mu = torch.zeros(4, 5)
logvar = torch.zeros(4, 5)
z = reparameterize(mu, logvar)
Attempts:
2 left
💡 Hint
Check the formula for standard deviation from log variance in VAE reparameterization.
✗ Incorrect
No runtime error is raised: all tensors have compatible shapes (4, 5), torch.exp works elementwise on tensors, and std is non-negative, so the code runs correctly (B). The snippet also uses the correct formula, std = torch.exp(0.5 * logvar), for recovering the standard deviation from the log variance.
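Beyond running without error, the point of this trick is differentiability: sampling z directly from N(mu, std) would block gradients, while writing z = mu + eps * std lets them flow back to mu and logvar. A minimal sketch checking this with autograd (the requires_grad setup is illustrative, not from the quiz):

```python
import torch

def reparameterize(mu, logvar):
    std = torch.exp(0.5 * logvar)   # std from log variance
    eps = torch.randn_like(std)     # noise drawn outside the autograd graph
    return mu + eps * std

mu = torch.zeros(4, 5, requires_grad=True)
logvar = torch.zeros(4, 5, requires_grad=True)
z = reparameterize(mu, logvar)
z.sum().backward()

# Gradients reach the distribution parameters, which a direct
# torch.normal(mu, std) sample would not allow.
print(mu.grad.shape)  # torch.Size([4, 5])
```

Since d(mu + eps * std)/d(mu) = 1, the gradient of the sum with respect to mu is a tensor of ones.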
❓ Metrics
expert · 2:00 remaining
Interpreting VAE training metrics
During VAE training, you observe the reconstruction loss steadily decreases but the KL divergence term remains near zero. What is the most likely explanation?
Attempts:
2 left
💡 Hint
Think about what a near-zero KL divergence means for latent variable distribution.
✗ Incorrect
A near-zero KL divergence means the approximate posterior matches the prior almost exactly, so the latent variables carry no information about the input. The decoder learns to ignore the latent code and reconstructs from its own capacity alone, a failure mode known as posterior collapse.
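A common mitigation (not covered in the quiz itself) is KL annealing: scale the KL term by a weight that ramps from 0 to 1 over training, so the decoder must use the latent code early on before regularization kicks in. A minimal sketch, assuming a simple linear schedule; the names kl_weight and annealed_loss are hypothetical:

```python
def kl_weight(step, warmup_steps=1000):
    # Linearly anneal the KL coefficient from 0 up to 1
    return min(1.0, step / warmup_steps)

def annealed_loss(recon, kl, step):
    # Early in training the KL term is almost switched off,
    # encouraging the model to put information into the latents.
    return recon + kl_weight(step) * kl

print(kl_weight(0), kl_weight(500), kl_weight(2000))  # 0.0 0.5 1.0
```

Other schedules (cyclical annealing, a fixed beta < 1, or free-bits thresholds on the KL) follow the same idea of keeping the KL term from dominating before the latents become useful.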