Complete the code to define the encoder layer in a Variational Autoencoder.
self.fc1 = nn.Linear(input_dim, [1])
The encoder first maps the input to a hidden dimension before producing latent variables.
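A minimal sketch of the encoder, assuming hypothetical layer sizes (784-dimensional input, 400-dimensional hidden layer, 20-dimensional latent space); the names `fc_mu` and `fc_logvar` are illustrative:

```python
import torch
import torch.nn as nn

# Hypothetical dimensions for illustration (e.g. flattened 28x28 images)
input_dim, hidden_dim, latent_dim = 784, 400, 20

fc1 = nn.Linear(input_dim, hidden_dim)         # input -> hidden
fc_mu = nn.Linear(hidden_dim, latent_dim)      # hidden -> mean of q(z|x)
fc_logvar = nn.Linear(hidden_dim, latent_dim)  # hidden -> log-variance of q(z|x)

x = torch.randn(8, input_dim)                  # a batch of 8 inputs
h = torch.relu(fc1(x))                         # hidden representation
mu, logvar = fc_mu(h), fc_logvar(h)            # latent distribution parameters
```

The encoder outputs two vectors per input, the mean and the log-variance, which parameterize the approximate posterior over z.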
Complete the code to sample latent variable z using reparameterization trick.
z = mu + [1] * std
The reparameterization trick samples from a normal distribution with mean mu and standard deviation std by drawing noise from a standard normal distribution, scaling it by std, and shifting it by mu.
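A minimal sketch of the reparameterization step, assuming `mu` and `logvar` come from the encoder; note that std is recovered from the log-variance as exp(0.5 * logvar):

```python
import torch

mu = torch.zeros(8, 20)               # encoder mean (placeholder values)
logvar = torch.zeros(8, 20)           # encoder log-variance (placeholder values)

std = torch.exp(0.5 * logvar)         # std = sqrt(var) = exp(logvar / 2)
eps = torch.randn_like(std)           # noise drawn from a standard normal
z = mu + eps * std                    # differentiable sample from N(mu, std^2)
```

Because the randomness lives entirely in `eps`, gradients can flow through `mu` and `std` during backpropagation.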
Fix the error in the KL divergence calculation between the latent distribution and standard normal.
kl_divergence = -0.5 * torch.sum(1 + logvar - [1] - mu.pow(2))
The KL divergence formula requires the variance, which is the exponential of the log variance.
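A minimal sketch of the corrected KL term, using placeholder values for `mu` and `logvar`; with mu = 0 and logvar = 0 (i.e. a standard normal posterior) the divergence is exactly zero, which is a quick sanity check:

```python
import torch

mu = torch.zeros(4, 2)        # placeholder latent means
logvar = torch.zeros(4, 2)    # placeholder log-variances

# KL(q(z|x) || N(0, I)); the variance term is exp(logvar)
kl_divergence = -0.5 * torch.sum(1 + logvar - logvar.exp() - mu.pow(2))
```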
Fill in the blank to complete the decoder forward pass that reconstructs the input.
x = F.relu(self.fc3(z))
reconstruction = torch.sigmoid(self.fc4([1]))
return reconstruction, mu, logvar
The decoder applies a ReLU activation to the hidden layer x, then uses x as input to the final layer for reconstruction.
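A minimal sketch of the decoder pass, assuming the same hypothetical layer sizes as above and the layer names `fc3`/`fc4` from the snippet:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical dimensions for illustration
latent_dim, hidden_dim, input_dim = 20, 400, 784

fc3 = nn.Linear(latent_dim, hidden_dim)   # latent -> hidden
fc4 = nn.Linear(hidden_dim, input_dim)    # hidden -> reconstruction

z = torch.randn(8, latent_dim)            # a batch of latent samples
x = F.relu(fc3(z))                        # hidden activation
reconstruction = torch.sigmoid(fc4(x))    # values squashed into (0, 1)
```

The sigmoid keeps outputs in (0, 1), matching the binary cross-entropy reconstruction loss used later.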
Fill all three blanks to complete the loss function combining reconstruction loss and KL divergence.
reconstruction_loss = F.binary_cross_entropy(recon_x, [1], reduction='sum')
kl_divergence = -0.5 * torch.sum(1 + [2] - [3] - mu.pow(2))
return reconstruction_loss + kl_divergence
The reconstruction loss compares the reconstructed output with the original input x. The KL divergence uses logvar and its exponential to compute variance.
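A minimal sketch of the full loss with all three blanks resolved, using random placeholder tensors in place of real model outputs (the clamp on `recon_x` just keeps the BCE numerically safe for this synthetic data):

```python
import torch
import torch.nn.functional as F

def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction term: compare the reconstruction with the original input x
    reconstruction_loss = F.binary_cross_entropy(recon_x, x, reduction='sum')
    # KL term: the variance is the exponential of the log-variance
    kl_divergence = -0.5 * torch.sum(1 + logvar - logvar.exp() - mu.pow(2))
    return reconstruction_loss + kl_divergence

# Placeholder tensors standing in for model outputs
x = torch.rand(8, 784)
recon_x = torch.rand(8, 784).clamp(1e-6, 1 - 1e-6)
mu = torch.zeros(8, 20)
logvar = torch.zeros(8, 20)

loss = vae_loss(recon_x, x, mu, logvar)
```

Summing (rather than averaging) both terms keeps the two parts of the evidence lower bound on the same scale.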