Recall & Review
beginner
What is an autoencoder in simple terms?
An autoencoder is a type of neural network that learns to copy its input to its output. It does this by first compressing the input into a smaller representation, then reconstructing the original input from that compressed form.
beginner
What are the two main parts of an autoencoder?
The two main parts are the encoder and the decoder. The encoder compresses the input into a smaller code, and the decoder tries to rebuild the original input from that code.
intermediate
Why do autoencoders learn a compressed representation of data?
Because the middle layer (called the bottleneck) has fewer neurons than the input, the network must learn the most important features to represent the data efficiently.
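The compression idea can be illustrated with tensor shapes; the dimensions below are illustrative, not tied to any particular dataset:

```python
import torch
import torch.nn as nn

# A single linear layer acting as a toy encoder:
# 784 input features are squeezed into a 3-dimensional code,
# so the layer is forced to discard all but the most useful information.
encoder = nn.Linear(784, 3)

x = torch.rand(1, 784)   # one flattened 28x28 input
code = encoder(x)
print(code.shape)        # torch.Size([1, 3])
```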
beginner
What loss function is commonly used to train an autoencoder?
Mean Squared Error (MSE) loss is commonly used because it measures how close the reconstructed output is to the original input.
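A minimal sketch of MSE in PyTorch, using `nn.MSELoss` on a hand-picked input and an imperfect reconstruction (the values are illustrative):

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss()

x = torch.tensor([[0.0, 1.0]])       # original input
x_hat = torch.tensor([[0.0, 0.5]])   # imperfect reconstruction

# Mean of the element-wise squared differences:
# ((0 - 0)^2 + (0.5 - 1)^2) / 2 = 0.125
loss = criterion(x_hat, x)
print(loss.item())  # 0.125
```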
intermediate
Show a simple PyTorch autoencoder architecture code snippet.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(784, 128),
            nn.ReLU(),
            nn.Linear(128, 64),
            nn.ReLU(),
            nn.Linear(64, 12),
            nn.ReLU(),
            nn.Linear(12, 3)  # bottleneck
        )
        self.decoder = nn.Sequential(
            nn.Linear(3, 12),
            nn.ReLU(),
            nn.Linear(12, 64),
            nn.ReLU(),
            nn.Linear(64, 128),
            nn.ReLU(),
            nn.Linear(128, 784),
            nn.Sigmoid()  # output between 0 and 1
        )

    def forward(self, x):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        return decoded
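A model like the one above could be trained with a loop along the following lines. This is a sketch under stated assumptions: a random batch stands in for a real dataset of flattened 28x28 images, the compact `nn.Sequential` model is a stand-in for the `Autoencoder` class, and the batch size and learning rate are illustrative:

```python
import torch
import torch.nn as nn

# Compact stand-in for the Autoencoder class: encoder down to a
# 3-dimensional bottleneck, then decoder back up to 784 features.
model = nn.Sequential(
    nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 3),                # encoder
    nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 784), nn.Sigmoid(),  # decoder
)
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(32, 784)  # random batch standing in for flattened images

for epoch in range(5):
    x_hat = model(x)
    loss = criterion(x_hat, x)   # reconstruction error against the input itself
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Note that the target of the loss is the input `x` itself; this is what makes training unsupervised, since no labels are needed.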
What is the main goal of an autoencoder?
Autoencoders learn to compress and then reconstruct the input data, so their main goal is reconstruction.
Which part of the autoencoder compresses the input data?
The encoder compresses the input into a smaller representation.
What is the 'bottleneck' in an autoencoder?
The bottleneck is the smallest layer that holds the compressed representation.
Which loss function is commonly used to train autoencoders?
MSE loss measures the difference between the input and reconstructed output.
In PyTorch, which activation function is often used at the output layer of an autoencoder for normalized data?
Sigmoid outputs values between 0 and 1, suitable for normalized data reconstruction.
Explain the structure of a simple autoencoder and how it processes data.
Think about how data flows from input to output through compression and reconstruction.
Describe why autoencoders are useful for learning data representations.
Consider what happens when the network must represent data in fewer dimensions.