Imagine you have a small set of photos to teach a computer to recognize cats. Why is it helpful to use data augmentation?
Think about how changing images slightly can help the model see more types of cats.
Data augmentation creates new training images by applying transformations like rotation or flipping. This helps the model learn to recognize objects in different conditions, improving its ability to generalize.
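To make this concrete, here is a minimal sketch of common augmentations using TensorFlow's `tf.image` module (assuming TensorFlow 2.x; the "photo" is a random placeholder tensor rather than a real cat image):

```python
import tensorflow as tf

# Placeholder "photo": random pixel values standing in for a real image.
image = tf.random.uniform(shape=(100, 100, 3))

# Each transformation produces a new variant of the same image.
flipped = tf.image.flip_left_right(image)          # mirror horizontally
rotated = tf.image.rot90(image)                    # rotate 90 degrees
brighter = tf.image.adjust_brightness(image, 0.2)  # shift brightness

# The label stays the same: a flipped or brighter cat is still a cat,
# so each variant is a free extra training example.
print(flipped.shape, rotated.shape, brighter.shape)
```

Because the label is unchanged, each variant effectively enlarges the training set at no labeling cost.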
Given this code snippet using TensorFlow's image augmentation, what is the shape of the output image?
```python
import tensorflow as tf

image = tf.random.uniform(shape=(100, 100, 3))
augmented = tf.image.random_flip_left_right(image)
print(augmented.shape)  # (100, 100, 3)
```
Flipping an image horizontally does not change its shape.
The random_flip_left_right function randomly flips the image horizontally (with a 50% chance) but never changes its shape, so the output is still (height, width, channels) — here, (100, 100, 3).
You want to test how data augmentation improves model performance on a small image dataset. Which model choice is best to clearly see the effect?
Think about a model that can learn from images and show clear differences when data changes.
A simple CNN trained from scratch on the small dataset is the best choice: it has no pretrained features to fall back on, so it overfits quickly without augmentation, making the performance gap between the augmented and unaugmented runs easy to see.
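As a rough illustration, the kind of small CNN meant here might look like the following Keras sketch (layer counts, filter sizes, and the 10-class output are illustrative assumptions, not prescribed by the question):

```python
import tensorflow as tf

# Hypothetical small CNN trained from scratch; all sizes are illustrative.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(100, 100, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),  # assumed 10 classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
print(model.output_shape)  # (None, 10)
```

Training this model twice — once on the raw dataset, once on the augmented one — and comparing validation accuracy makes the effect of augmentation directly measurable.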
When applying data augmentation to a small image dataset, which approach to augmentation intensity usually helps the model most?
Think about balancing new examples and keeping images recognizable.
Moderate augmentation exposes the model to varied but still realistic images, improving generalization; overly aggressive transformations can distort images so much that they no longer match their labels, which hurts rather than helps.
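One way to express "moderate" concretely is with Keras preprocessing layers, as in this sketch (the specific factors — a small rotation and zoom range — are illustrative choices, not prescribed values):

```python
import tensorflow as tf

# Moderate settings: small rotations and zooms keep images recognizable.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.05),  # fraction of 2*pi, ~±18 degrees
    tf.keras.layers.RandomZoom(0.1),       # up to ±10% zoom
])

# A placeholder batch of 4 random "images".
batch = tf.random.uniform(shape=(4, 100, 100, 3))
augmented = augment(batch, training=True)  # training=True enables randomness
print(augmented.shape)  # (4, 100, 100, 3)
```

Cranking these factors up (e.g. full 180-degree rotations on upright photos) would produce images the model would rarely see in practice, which is exactly the "unrealistic data" the answer warns against.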
You trained two image classifiers: one with data augmentation and one without. After training, the augmented model has higher training loss but better validation accuracy. What does this indicate?
Think about what it means when training is harder but validation improves.
Data augmentation makes the training data more varied and harder to fit, which raises training loss; but because the model can no longer memorize the original images, it learns more general features, which shows up as improved validation accuracy. This is the expected, healthy trade-off.