Recall & Review
beginner
What is the MixUp strategy in machine learning?
MixUp is a data augmentation technique where two images and their labels are combined by taking a weighted average. This helps the model learn smoother decision boundaries and improves generalization.
beginner
How does MixUp create new training samples?
MixUp creates new samples by mixing two images: new_image = λ * image1 + (1 - λ) * image2, and similarly mixes their labels: new_label = λ * label1 + (1 - λ) * label2, where λ is a value between 0 and 1.
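A minimal numeric sketch of this formula, with an illustrative fixed λ = 0.7 and toy arrays (these values are not from the card, just an example):

```python
import numpy as np

lam = 0.7
image1 = np.array([0.0, 1.0, 2.0])   # stand-in for pixel values
image2 = np.array([4.0, 4.0, 4.0])
label1 = np.array([1.0, 0.0])        # one-hot label for class 0
label2 = np.array([0.0, 1.0])        # one-hot label for class 1

# Apply the MixUp formulas from the card
new_image = lam * image1 + (1 - lam) * image2
new_label = lam * label1 + (1 - lam) * label2
# new_image ≈ [1.2, 1.9, 2.6], new_label ≈ [0.7, 0.3]
```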
intermediate
Why does MixUp improve model robustness?
By training on mixed images and labels, the model learns to predict soft labels and becomes less sensitive to noise or small input changes, leading to better robustness and reduced overfitting.
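To make "predicting soft labels" concrete, here is a hedged sketch of cross-entropy computed against a mixed (soft) label rather than a hard one-hot label; the probabilities and label weights are made up for illustration:

```python
import numpy as np

def soft_cross_entropy(probs, soft_label):
    # probs: model's predicted class probabilities (sums to 1)
    # soft_label: mixed label, e.g. 0.7 * one_hot(A) + 0.3 * one_hot(B)
    return -np.sum(soft_label * np.log(probs))

probs = np.array([0.6, 0.3, 0.1])        # hypothetical model output
soft_label = np.array([0.7, 0.3, 0.0])   # MixUp-style soft target
loss = soft_cross_entropy(probs, soft_label)
```

The loss rewards the model for spreading probability mass across both mixed classes instead of committing fully to one, which is what pushes decision boundaries to be smoother.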
intermediate
What role does the parameter λ play in MixUp?
λ controls the mixing ratio between two images and their labels. It is usually sampled from a Beta distribution, allowing random but controlled blending of samples.
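A short sketch of sampling λ from a Beta distribution, assuming NumPy's `Generator.beta`; the choice alpha = 0.4 is illustrative, and smaller alpha values push λ toward 0 or 1 (mostly one image dominates):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.4
# Draw several lambda values from Beta(alpha, alpha)
lams = rng.beta(alpha, alpha, size=5)
# Every sampled lambda lies in [0, 1], so the blend is always a valid weighted average.
```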
beginner
Show a simple Python code snippet to apply MixUp on two images and labels.
import numpy as np

def mixup(image1, label1, image2, label2, alpha=0.4):
    # Sample the mixing ratio from Beta(alpha, alpha)
    lam = np.random.beta(alpha, alpha)
    mixed_image = lam * image1 + (1 - lam) * image2
    mixed_label = lam * label1 + (1 - lam) * label2
    return mixed_image, mixed_label
What does MixUp combine to create new training samples?
MixUp combines two images and their labels by weighted averaging to create new training samples.
Which distribution is commonly used to sample the mixing parameter λ in MixUp?
The Beta distribution is used to sample λ because it produces values between 0 and 1 with flexible shapes.
What is a key benefit of using MixUp during training?
MixUp helps the model generalize better and become more robust by training on mixed samples.
In MixUp, what happens to the labels when two images are mixed?
Labels are mixed using the same weighted average as the images to create soft labels.
Which of the following is NOT true about MixUp?
MixUp does not require changing the model architecture; it is a data augmentation technique.
Explain how the MixUp strategy works and why it helps improve model performance.
Think about how combining two samples can teach the model smoother decision boundaries.
Describe the role of the parameter λ in MixUp and how it is chosen.
Consider how λ decides how much of each image and label is used.