PyTorch · ~20 mins

Data augmentation in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
Predict Output (intermediate)
Output of image rotation augmentation
Given the following PyTorch code applying a rotation augmentation to a 3x3 grayscale image tensor, what is the output tensor after the transformation?
PyTorch
import torch
import torchvision.transforms as T

image = torch.tensor([[1, 2, 3],
                      [4, 5, 6],
                      [7, 8, 9]], dtype=torch.float32).unsqueeze(0).unsqueeze(0)

transform = T.RandomRotation(degrees=(90, 90))

rotated_image = transform(image)

print(rotated_image.squeeze().int())
A
[[1 4 7]
 [2 5 8]
 [3 6 9]]
B
[[7 4 1]
 [8 5 2]
 [9 6 3]]
C
[[9 8 7]
 [6 5 4]
 [3 2 1]]
D
[[3 6 9]
 [2 5 8]
 [1 4 7]]
💡 Hint
Think about how a 90-degree rotation moves each pixel to a new position in the matrix.
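To build intuition for this hint (without giving away the answer), you can experiment with `torch.rot90`, which rotates a matrix 90 degrees counter-clockwise per step — the same direction torchvision uses for positive angles. A tiny 2x2 example, chosen here purely for illustration, shows how pixels move:

```python
import torch

# Illustrative 2x2 grid (not the quiz matrix) to visualize pixel movement.
grid = torch.tensor([[1, 2],
                     [3, 4]])

# k=1 performs one 90-degree counter-clockwise turn.
rotated = torch.rot90(grid, k=1)
print(rotated)
# The top-right value (2) ends up at the top-left, and so on.
```

Running this on paper first, then checking against the printed tensor, is a good way to verify your mental model before answering.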
Model Choice (intermediate)
Best augmentation for small dataset with overfitting
You have a small image dataset and your model is overfitting. Which data augmentation technique is most effective to reduce overfitting?
A. Reducing learning rate without augmentation
B. Adding Gaussian noise to labels
C. Increasing batch size without augmentation
D. Random horizontal flip and random crop
💡 Hint
Think about augmentations that increase data diversity realistically.
Hyperparameter (advanced)
Choosing rotation degree range for augmentation
You want to augment images by rotating them randomly. Which rotation degree range is most appropriate to preserve the original image semantics for handwritten digit recognition?
A. Rotate between -10 and 10 degrees
B. Rotate between -180 and 180 degrees
C. Rotate between 90 and 270 degrees
D. Rotate between 45 and 135 degrees
💡 Hint
Consider how much rotation changes the digit's appearance.
Metrics (advanced)
Effect of augmentation on validation accuracy
You train two identical models on the same dataset. Model A uses data augmentation; Model B does not. After training, Model A has 85% validation accuracy, Model B has 80%. What does this difference most likely indicate?
A. Augmentation caused overfitting by adding noise
B. Model B trained longer, so it performed worse
C. Augmentation improved generalization by increasing data diversity
D. Validation data was augmented, causing bias
💡 Hint
Think about how augmentation affects model learning on unseen data.
🔧 Debug (expert)
Debugging incorrect augmentation pipeline
You apply this PyTorch augmentation pipeline, but the output images are always identical to the input. What is the bug?
PyTorch
import torchvision.transforms as T

transform = T.Compose([
    T.RandomHorizontalFlip(p=0),
    T.RandomRotation(degrees=0),
    T.ColorJitter(brightness=0)
])

augmented_image = transform(input_image)
A. All augmentation parameters are set to zero probability or zero effect, so no change occurs
B. The Compose function is missing a call to .to_tensor(), so no augmentation happens
C. RandomHorizontalFlip requires p=1 to flip images always; p=0 disables it
D. ColorJitter requires at least one non-zero parameter to change brightness
💡 Hint
Check the parameters controlling augmentation strength.