Computer Vision · ~5 min read

MixUp strategy in Computer Vision - Cheat Sheet & Quick Revision

Recall & Review
beginner
What is the MixUp strategy in machine learning?
MixUp is a data augmentation technique where two images and their labels are combined by taking a weighted average. This helps the model learn smoother decision boundaries and improves generalization.
beginner
How does MixUp create new training samples?
MixUp creates new samples by mixing two images: new_image = λ * image1 + (1 - λ) * image2, and similarly mixes their labels: new_label = λ * label1 + (1 - λ) * label2, where λ is a value between 0 and 1.
intermediate
Why does MixUp improve model robustness?
By training on mixed images and labels, the model learns to predict soft labels and becomes less sensitive to noise or small changes, leading to better robustness and less overfitting.
intermediate
What role does the parameter λ play in MixUp?
λ controls the mixing ratio between the two images and their labels. It is usually sampled from a Beta(α, α) distribution, so the blending is random but controlled: the hyperparameter α determines how close λ tends to stay to 0 or 1.
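To get a feel for how α shapes λ, here is a minimal sketch that samples λ from Beta(α, α) with NumPy; the two α values are just illustrative choices, not prescribed settings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small alpha (e.g. 0.2) pushes lambda toward 0 or 1, so mixed samples
# stay close to one of the two originals. alpha = 1.0 makes lambda
# uniform on [0, 1], giving much more aggressive blending.
for alpha in (0.2, 1.0):
    lam = rng.beta(alpha, alpha, size=10_000)
    near_extremes = np.mean((lam < 0.1) | (lam > 0.9))
    print(f"alpha={alpha}: mean lambda={lam.mean():.2f}, "
          f"fraction near 0 or 1: {near_extremes:.2f}")
```

Either way the mean stays near 0.5, but the smaller α concentrates λ at the extremes, which is why small α values produce gentler augmentation.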
beginner
Show a simple Python code snippet to apply MixUp on two images and labels.
import numpy as np

def mixup(image1, label1, image2, label2, alpha=0.4):
    # Sample the mixing ratio lambda from a Beta(alpha, alpha) distribution.
    lam = np.random.beta(alpha, alpha)
    # Blend pixels and labels with the same weights.
    mixed_image = lam * image1 + (1 - lam) * image2
    mixed_label = lam * label1 + (1 - lam) * label2
    return mixed_image, mixed_label
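As a quick sanity check, the function can be exercised on tiny arrays with one-hot labels; the pixel and label values here are made up for illustration, and the definition is repeated so the sketch runs standalone:

```python
import numpy as np

def mixup(image1, label1, image2, label2, alpha=0.4):
    # Sample the mixing ratio lambda from a Beta(alpha, alpha) distribution.
    lam = np.random.beta(alpha, alpha)
    mixed_image = lam * image1 + (1 - lam) * image2
    mixed_label = lam * label1 + (1 - lam) * label2
    return mixed_image, mixed_label

# Two toy 2x2 "images" and one-hot labels for a 3-class problem.
img_a = np.full((2, 2), 1.0)            # e.g. class 0
img_b = np.full((2, 2), 0.0)            # e.g. class 1
lab_a = np.array([1.0, 0.0, 0.0])
lab_b = np.array([0.0, 1.0, 0.0])

mixed_img, mixed_lab = mixup(img_a, lab_a, img_b, lab_b)
# The soft label weights always sum to 1, mirroring the pixel blend.
print(mixed_lab, mixed_lab.sum())
```

Note that the mixed label is a soft target over both classes rather than a single hard class, which is exactly what the model is trained to predict.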
What does MixUp combine to create new training samples?
A. Only two images by stacking them
B. Two images and their labels using a weighted average
C. Two labels without changing images
D. Random noise added to images

Which distribution is commonly used to sample the mixing parameter λ in MixUp?
A. Beta distribution
B. Uniform distribution
C. Normal distribution
D. Poisson distribution

What is a key benefit of using MixUp during training?
A. Reduced model size
B. Faster training time
C. Improved model robustness and generalization
D. Simpler model architecture

In MixUp, what happens to the labels when two images are mixed?
A. Labels are mixed using the same weights as the images
B. Labels are concatenated
C. Labels are replaced by zeros
D. Labels are ignored

Which of the following is NOT true about MixUp?
A. It creates new training samples by mixing images
B. It helps reduce overfitting
C. It mixes labels to create soft targets
D. It requires changing the model architecture
Explain how the MixUp strategy works and why it helps improve model performance.
Hint: Think about how combining two samples can teach the model smoother decision boundaries.
Describe the role of the parameter λ in MixUp and how it is chosen.
Hint: Consider how λ decides how much of each image and label is used.