PyTorch · ~10 mins

Variational Autoencoder in PyTorch - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (Easy)

Complete the code to define the encoder layer in a Variational Autoencoder.

PyTorch
self.fc1 = nn.Linear(input_dim, [1])
Drag options to the blanks, or click a blank and then click an option.
A. hidden_dim
B. latent_dim
C. output_dim
D. batch_size
Common Mistakes
Using latent_dim directly instead of hidden_dim.
Confusing output_dim with latent_dim.
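For context, here is a minimal encoder sketch showing where the answer (A, hidden_dim) fits. The dimension values and the mu/logvar head names are illustrative, not part of the task:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        # Answer A: the first layer maps the input to the hidden size
        self.fc1 = nn.Linear(input_dim, hidden_dim)
        # Separate heads produce the mean and log-variance of size latent_dim
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, x):
        h = F.relu(self.fc1(x))
        return self.fc_mu(h), self.fc_logvar(h)

enc = Encoder()
mu, logvar = enc(torch.randn(8, 784))
```

latent_dim only appears at the heads; using it in fc1 would skip the hidden representation entirely.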
Task 2: Fill in the blank (Medium)

Complete the code to sample latent variable z using reparameterization trick.

PyTorch
z = mu + [1] * std
A. torch.rand_like(mu)
B. torch.zeros_like(mu)
C. torch.ones_like(mu)
D. torch.randn_like(mu)
Common Mistakes
Using zeros or ones instead of random noise.
Using uniform random noise instead of normal.
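A sketch of the full reparameterization step (answer D): sample standard-normal noise with torch.randn_like, then scale and shift it, so gradients can flow through mu and logvar while the randomness stays outside the computation graph:

```python
import torch

def reparameterize(mu, logvar):
    # std = exp(0.5 * logvar), since logvar stores log(sigma^2)
    std = torch.exp(0.5 * logvar)
    # Answer D: standard-normal noise with the same shape as mu
    eps = torch.randn_like(mu)
    return mu + eps * std

mu = torch.zeros(4, 20)
logvar = torch.zeros(4, 20)  # sigma = 1
z = reparameterize(mu, logvar)
```

torch.rand_like draws from the uniform distribution on [0, 1), which is the wrong noise model here; zeros or ones would make the sample deterministic.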
Task 3: Fill in the blank (Hard)

Fix the error in the KL divergence calculation between the latent distribution and the standard normal.

PyTorch
kl_divergence = -0.5 * torch.sum(1 + logvar - [1] - mu.pow(2))
A. logvar
B. logvar.exp()
C. mu
D. logvar.log()
Common Mistakes
Using logvar directly instead of its exponential.
Using log of logvar which is incorrect.
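The corrected term in context (answer B): for a diagonal Gaussian N(mu, sigma^2) measured against N(0, I), the closed-form KL needs sigma^2, which is recovered from the log-variance via logvar.exp(). A small sanity-check sketch:

```python
import torch

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over all dimensions
    # Answer B: logvar.exp() recovers sigma^2 from the log-variance
    return -0.5 * torch.sum(1 + logvar - logvar.exp() - mu.pow(2))

# When mu = 0 and sigma = 1 (logvar = 0), the two distributions
# coincide and the KL divergence should be exactly zero
mu = torch.zeros(3, 20)
logvar = torch.zeros(3, 20)
kl = kl_to_standard_normal(mu, logvar)
```

Using raw logvar in that slot would subtract log(sigma^2) instead of sigma^2, and logvar.log() is undefined whenever logvar is non-positive.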
Task 4: Fill in the blank (Hard)

Fill in the blank to complete the decoder forward pass that reconstructs the input.

PyTorch
x = F.relu(self.fc3(z))
reconstruction = torch.sigmoid(self.fc4([1]))
return reconstruction, mu, logvar
A. logvar
B. z
C. x
D. mu
Common Mistakes
Passing mu or logvar instead of the hidden layer x.
Using z in place of the hidden layer before final output.
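In context (answer C): the decoder first expands z into a hidden activation x, and the final layer consumes that hidden x rather than z, mu, or logvar. A minimal standalone sketch with illustrative sizes:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Decoder(nn.Module):
    def __init__(self, latent_dim=20, hidden_dim=400, output_dim=784):
        super().__init__()
        self.fc3 = nn.Linear(latent_dim, hidden_dim)
        self.fc4 = nn.Linear(hidden_dim, output_dim)

    def forward(self, z):
        x = F.relu(self.fc3(z))
        # Answer C: the output layer takes the hidden activation x;
        # sigmoid keeps each reconstructed pixel in (0, 1)
        return torch.sigmoid(self.fc4(x))

dec = Decoder()
recon = dec(torch.randn(8, 20))
```

Passing z to fc4 would also fail at runtime here, since fc4 expects hidden_dim inputs, not latent_dim.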
Task 5: Fill in the blank (Hard)

Fill all three blanks to complete the loss function combining reconstruction loss and KL divergence.

PyTorch
reconstruction_loss = F.binary_cross_entropy(recon_x, [1], reduction='sum')
kl_divergence = -0.5 * torch.sum(1 + [2] - [3] - mu.pow(2))
return reconstruction_loss + kl_divergence
A. x
B. logvar
C. logvar.exp()
D. recon_x
Common Mistakes
Mixing up recon_x and x in reconstruction loss.
Using logvar directly instead of its exponential in KL divergence.
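Putting all three answers together (A, B, C): binary cross-entropy compares the reconstruction recon_x against the original input x, and the KL term again uses logvar.exp() for sigma^2. A runnable sketch with illustrative tensor sizes:

```python
import torch
import torch.nn.functional as F

def vae_loss(recon_x, x, mu, logvar):
    # Blank [1] = x: BCE measures recon_x against the original input
    reconstruction_loss = F.binary_cross_entropy(recon_x, x, reduction='sum')
    # Blank [2] = logvar, blank [3] = logvar.exp()
    kl_divergence = -0.5 * torch.sum(1 + logvar - logvar.exp() - mu.pow(2))
    return reconstruction_loss + kl_divergence

x = torch.rand(4, 784)               # targets in [0, 1], as BCE requires
recon_x = torch.full_like(x, 0.5)    # a deliberately poor reconstruction
mu = torch.zeros(4, 20)
logvar = torch.zeros(4, 20)
loss = vae_loss(recon_x, x, mu, logvar)
```

With mu = 0 and logvar = 0 the KL term vanishes, so the loss above is pure reconstruction error; swapping recon_x and x in the BCE call would compute the loss against the wrong target.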