Imagine you have 100 pictures of cats. You flip each picture horizontally to create new images. How does this affect the number of training images?
Think about how many images you have before and after flipping.
Each original image yields exactly one flipped copy, so the total is original + flipped = 2 times the original count: 200 images.
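The doubling can be sketched in a few lines. This is a minimal illustration using NumPy arrays as stand-in "images"; the variable names are illustrative, not part of any library API.

```python
import numpy as np

# 100 fake 8x8 grayscale arrays stand in for the cat pictures.
originals = [np.random.rand(8, 8) for _ in range(100)]

# A horizontal flip mirrors each image left-to-right.
flipped = [np.fliplr(img) for img in originals]

# Keeping both the originals and the flips doubles the dataset.
dataset = originals + flipped
print(len(dataset))  # 200 = 2 * 100
```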
If you rotate each image in your dataset by 90, 180, and 270 degrees, how many images will you have compared to the original?
Count the original plus all rotated versions.
Each image generates three new rotated images plus the original, so total images are 4 times the original.
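The same counting works for rotations. A sketch, again with NumPy arrays as placeholder images: each image contributes itself plus its 90-, 180-, and 270-degree rotations.

```python
import numpy as np

originals = [np.random.rand(8, 8) for _ in range(100)]

dataset = []
for img in originals:
    dataset.append(img)              # keep the original
    for k in (1, 2, 3):              # 90, 180, 270 degrees
        dataset.append(np.rot90(img, k))

print(len(dataset))  # 400 = 4 * 100
```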
What is the output of this code that calculates augmented dataset size?
```python
original_size = 150
augmentations_per_image = 5
augmented_size = original_size * (augmentations_per_image + 1)
print(augmented_size)
```
Remember to add the original images to the augmented ones.
The total dataset size is original images plus 5 augmentations each, so 150 * (5 + 1) = 900.
Which statement best explains why data augmentation can improve training accuracy?
Think about how variety in data affects learning.
Augmentation creates diverse examples, helping the model avoid overfitting and improve accuracy.
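One way to picture this: with random augmentation, the model rarely sees the exact same pixels twice, which discourages memorizing individual images. A minimal sketch, assuming a hypothetical `augment` helper built from NumPy flips and rotations:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, rng):
    # Hypothetical helper: randomly flip, then rotate by a
    # random multiple of 90 degrees, producing a fresh "view".
    if rng.random() < 0.5:
        img = np.fliplr(img)
    return np.rot90(img, k=int(rng.integers(0, 4)))

img = np.arange(16).reshape(4, 4)

# Two training passes can see two different views of the same image,
# so the model must learn features that survive these transformations.
view1 = augment(img, rng)
view2 = augment(img, rng)
```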
Given this code snippet, why does the augmented dataset have fewer images than expected?
```python
images = [img1, img2, img3]
augmented_images = []
for img in images:
    augmented_images.append(img)
    augmented_images.append(flip(img))
    augmented_images.append(rotate(img))
print(len(augmented_images))
```
Check what the rotate function returns.
If rotate modifies the image in place and returns None, each append adds None instead of an image. The printed length is still 9, because len counts the None entries too, but only 6 of those entries are usable images, so the effective dataset is smaller than expected.
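A corrected sketch: the fix is to make sure every helper returns a new image rather than None. The `flip` and `rotate` definitions below are hypothetical stand-ins for the snippet's helpers, implemented with NumPy.

```python
import numpy as np

# Hypothetical helpers standing in for the snippet's flip/rotate.
# The key property: each RETURNS a new image. An in-place rotate
# (like list.sort) would return None and pollute the list.
def flip(img):
    return np.fliplr(img)

def rotate(img):
    return np.rot90(img)

# Placeholder images in place of img1, img2, img3.
images = [np.ones((2, 2)), np.zeros((2, 2)), np.eye(2)]

augmented_images = []
for img in images:
    augmented_images.append(img)
    augmented_images.append(flip(img))
    augmented_images.append(rotate(img))

print(len(augmented_images))  # 9, and every entry is a real image
```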