
Variational Autoencoder in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
Predict Output (intermediate)
Output of VAE latent variable sampling code
What is the shape of the sampled latent variable z after running this PyTorch code snippet?
PyTorch
import torch

batch_size = 16
latent_dim = 10
mu = torch.zeros(batch_size, latent_dim)
logvar = torch.zeros(batch_size, latent_dim)
std = torch.exp(0.5 * logvar)
eps = torch.randn_like(std)
z = mu + eps * std
print(z.shape)
A. torch.Size([16, 10])
B. torch.Size([10, 16])
C. torch.Size([10])
D. torch.Size([16])
💡 Hint
Remember that mu and logvar have shape (batch_size, latent_dim), and sampling keeps that shape.
Model Choice (intermediate)
Choosing the correct VAE loss function components
Which of the following correctly describes the two main components of the Variational Autoencoder loss function?
A. Cross-entropy loss between input and output, plus L2 regularization on weights
B. Mean squared error between latent variables, plus entropy of output distribution
C. Reconstruction loss measuring output-input difference, plus KL divergence between approximate posterior and prior
D. Hinge loss on output labels, plus KL divergence between input and output
💡 Hint
Think about what VAE tries to reconstruct and what distribution it tries to regularize.
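The hint can be made concrete. A typical PyTorch VAE loss pairs a reconstruction term with a closed-form KL term; the sketch below is illustrative (the function name is made up, and MSE is one common choice of reconstruction loss, with binary cross-entropy another frequent option for image data):

```python
import torch
import torch.nn.functional as F

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction term: how closely the decoder output matches the input.
    recon = F.mse_loss(recon_x, x, reduction="sum")
    # Closed-form KL divergence between the approximate posterior
    # N(mu, sigma^2) and the standard normal prior N(0, I).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

x = torch.rand(8, 20)
recon_x = torch.rand(8, 20)
mu = torch.zeros(8, 4)
logvar = torch.zeros(8, 4)
loss = vae_loss(recon_x, x, mu, logvar)
print(loss.shape)  # torch.Size([]) -- a scalar
```

With `mu` and `logvar` both zero, the KL term vanishes and the loss reduces to the reconstruction term alone.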
Hyperparameter (advanced)
Effect of latent dimension size in VAE
What is the most likely effect of increasing the latent dimension size in a Variational Autoencoder model?
A. The KL divergence term becomes zero regardless of data
B. The model will always perform worse due to increased computational cost
C. The reconstruction loss will increase because the latent space is too large
D. The model can capture more complex data features but risks overfitting and less regularization
💡 Hint
Think about how latent space size affects model capacity and regularization.
🔧 Debug (advanced)
Identifying error in VAE reparameterization code
What error will this PyTorch code raise when running the reparameterization step in a VAE?
PyTorch
import torch

def reparameterize(mu, logvar):
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + eps * std

mu = torch.zeros(4, 5)
logvar = torch.zeros(4, 5)
z = reparameterize(mu, logvar)
A. No error, code runs correctly
B. RuntimeError due to shape mismatch in multiplication
C. TypeError because torch.exp cannot take a logvar tensor
D. ValueError because std is negative
💡 Hint
Check the formula for standard deviation from log variance in VAE reparameterization.
Metrics (expert)
Interpreting VAE training metrics
During VAE training, you observe the reconstruction loss steadily decreases but the KL divergence term remains near zero. What is the most likely explanation?
A. The model is perfectly regularized and has an optimal latent distribution
B. The model ignores the latent space and behaves like a standard autoencoder
C. The KL divergence term is not implemented correctly and is always zero
D. The reconstruction loss is not computed properly and is misleading
💡 Hint
Think about what a near-zero KL divergence means for latent variable distribution.
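To reason about the hint, it helps to see what the closed-form KL term evaluates to in each regime. A minimal sketch (the helper name is illustrative) comparing a posterior that matches the prior against one that encodes information:

```python
import torch

def kl_divergence(mu, logvar):
    # Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent
    # dimensions and averaged over the batch.
    return (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)).mean()

# When the encoder outputs the prior's own parameters (mu=0, logvar=0),
# the KL term is exactly zero: the latent code carries no information
# about the input.
collapsed = kl_divergence(torch.zeros(16, 10), torch.zeros(16, 10))
print(collapsed.item())  # 0.0

# A posterior that actually encodes the input pays a positive KL cost.
informative = kl_divergence(torch.ones(16, 10), torch.zeros(16, 10))
print(informative.item())  # 5.0
```

A KL term that stays near zero throughout training therefore indicates the approximate posterior has stayed at (or collapsed to) the prior, rather than encoding anything input-specific.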