Imagine you have only 100 images to train a computer vision model. Why is data augmentation helpful in this case?
Think about how to get more training examples without collecting new images.
Data augmentation creates new images by rotating, flipping, or changing brightness. This helps the model see more variety and learn better from limited data.
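As a minimal sketch of those three transformations (using NumPy only; a random array stands in for a real photo), each one produces a new training example without changing the label:

```python
import numpy as np

image = np.random.rand(64, 64, 3)            # stand-in for a real 64x64 RGB photo

rotated = np.rot90(image)                    # 90-degree rotation
flipped = np.flip(image, axis=1)             # horizontal flip
brighter = np.clip(image * 1.2, 0.0, 1.0)    # brightness increase, kept in [0, 1]

# Three transforms on 1 image -> 3 extra training examples, same label.
print(rotated.shape, flipped.shape, brighter.shape)
```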
What will be the shape of the output image after applying horizontal flip augmentation using this code?
import numpy as np

image = np.random.rand(64, 64, 3)        # (height, width, channels)
flipped_image = np.flip(image, axis=1)   # reverse pixels along the width axis
print(flipped_image.shape)
Flipping changes pixel order but not image dimensions.
The flip operation reverses pixels along the width (axis=1) but keeps the shape the same.
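A quick check (a NumPy sketch) confirms both claims: the shape is unchanged, and only the column order is reversed:

```python
import numpy as np

image = np.random.rand(64, 64, 3)
flipped = np.flip(image, axis=1)

assert flipped.shape == image.shape                # dimensions unchanged
assert np.array_equal(flipped, image[:, ::-1, :])  # columns reversed
print(flipped.shape)
```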
You have 50 images. You apply 3 augmentation techniques: rotation, horizontal flip, and brightness change. Each technique creates one new image per original image. How many images will you have after augmentation?
Count original images plus all augmented images.
Each of the 3 augmentations creates 50 new images, so total images = 50 original + 3*50 = 200.
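The arithmetic can be spelled out directly (a trivial sketch; the variable names are illustrative):

```python
n_originals = 50
n_techniques = 3                          # rotation, horizontal flip, brightness change

augmented = n_techniques * n_originals    # one new image per technique per original
total = n_originals + augmented           # originals are kept in the training set
print(total)
```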
What error will this code produce when trying to fine-tune a pretrained model on a small dataset?
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Model

base_model = MobileNetV2(weights='imagenet', include_top=False,
                         input_shape=(224, 224, 3))
for layer in base_model.layers:
    layer.trainable = False  # freeze the pretrained weights

x = Dense(10, activation='softmax')(base_model.output)
model = Model(inputs=base_model.input, outputs=x)
model.compile(optimizer='adam', loss='categorical_crossentropy')
Check the shape of the base_model output just before the Dense layer.
The base_model output is a 4D tensor (batch, height, width, channels). A Dense layer applied to it acts only on the last axis, so the model outputs (batch, 7, 7, 10) instead of (batch, 10), and training fails with a shape mismatch against the one-hot labels. Add a GlobalAveragePooling2D (or Flatten) layer between the base model and the Dense layer.
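One way to apply that fix, as a sketch: insert GlobalAveragePooling2D between the base model and the classifier head so the Dense layer receives a 2D (batch, features) tensor:

```python
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

base_model = MobileNetV2(weights='imagenet', include_top=False,
                         input_shape=(224, 224, 3))
for layer in base_model.layers:
    layer.trainable = False  # freeze the pretrained weights

x = GlobalAveragePooling2D()(base_model.output)  # 4D feature map -> (batch, 1280)
outputs = Dense(10, activation='softmax')(x)     # (batch, 10), as the labels expect
model = Model(inputs=base_model.input, outputs=outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy')
print(model.output_shape)
```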
You have only 30 labeled images for a classification task. You want to improve your model's accuracy. Which strategy is most effective?
Think about leveraging existing knowledge from large datasets.
Transfer learning reuses knowledge from models pretrained on large datasets to help learning on small ones. Freezing most layers prevents overfitting, and fine-tuning the remaining layers adapts the model to your data.
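As a sketch of that strategy (assuming a Keras workflow; the class count, layer count, and learning rates are illustrative), a common recipe is to train a new head with the base frozen, then unfreeze the last few layers at a low learning rate:

```python
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

base = MobileNetV2(weights='imagenet', include_top=False,
                   input_shape=(224, 224, 3))
for layer in base.layers:
    layer.trainable = False              # phase 1: freeze all pretrained layers

x = GlobalAveragePooling2D()(base.output)
out = Dense(5, activation='softmax')(x)  # e.g. 5 classes in the small dataset
model = Model(inputs=base.input, outputs=out)
model.compile(optimizer=Adam(1e-3), loss='categorical_crossentropy')
# model.fit(train_images, train_labels, ...)   # train the new head first

# phase 2: unfreeze only the last few layers and fine-tune gently
for layer in base.layers[-20:]:
    layer.trainable = True
model.compile(optimizer=Adam(1e-5), loss='categorical_crossentropy')
# model.fit(train_images, train_labels, ...)   # short fine-tuning run
```

The low learning rate in phase 2 matters: large updates would quickly destroy the pretrained features that make transfer learning effective on 30 images.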