Complete the code to load images from a directory using a common computer vision library.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1./255)
data = datagen.flow_from_directory(
    '[1]',
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary'
)
The blank should hold the directory path where the training images are stored. Here, 'dataset/train' is the expected answer.
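With the blank filled in as the explanation suggests, the loading step can be exercised end to end. The tiny temporary dataset built below is a stand-in assumption so the snippet runs without a real 'dataset/train' folder (Pillow is assumed available, since Keras image loading already depends on it):

```python
import os
import tempfile

import numpy as np
from PIL import Image
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Build a tiny throwaway dataset: two class subfolders, one image each.
# In the exercise this directory would be 'dataset/train'.
root = tempfile.mkdtemp()
for cls in ('cats', 'dogs'):
    os.makedirs(os.path.join(root, cls))
    Image.fromarray(np.zeros((150, 150, 3), dtype=np.uint8)).save(
        os.path.join(root, cls, 'img0.png'))

datagen = ImageDataGenerator(rescale=1./255)
data = datagen.flow_from_directory(
    root,                       # exercise answer: 'dataset/train'
    target_size=(150, 150),
    batch_size=32,
    class_mode='binary')

# Only 2 images exist, so the first batch holds both of them.
x, y = next(data)
print(x.shape, y.shape)  # (2, 150, 150, 3) (2,)
```

Note that `class_mode='binary'` requires exactly this kind of layout: one subdirectory per class under the root.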
Complete the code to apply data augmentation to images to help with small datasets.
datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=[1],
    horizontal_flip=True
)
A rotation_range of 40 degrees is a common augmentation that helps models generalize better on small datasets.
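A quick way to see the augmentation in action is to push a single in-memory image through the generator's `flow` method. The random image below is purely illustrative:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# rotation_range=40 fills the blank, per the explanation above.
datagen = ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    horizontal_flip=True
)

# One random RGB image with a leading batch axis, as flow expects.
image = np.random.randint(0, 256, size=(1, 150, 150, 3)).astype('float32')

# Each call to next() yields a freshly augmented (and rescaled) batch.
batch = next(datagen.flow(image, batch_size=1))
print(batch.shape)  # (1, 150, 150, 3)
```

Because the transforms are random, repeated calls to `next()` return different variants of the same source image, which is exactly how augmentation stretches a small dataset.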
Fix the error in the code to freeze the base model layers for transfer learning.
import tensorflow as tf

base_model = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights='imagenet'
)
for layer in base_model.[1]:
    layer.trainable = False
The base model's layers are accessed via the 'layers' attribute; setting each layer's 'trainable' flag to False freezes them.
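The freezing loop works the same on any Keras model, so a small Sequential model can stand in for MobileNetV2 here (an assumption made only to keep the sketch light and avoid downloading ImageNet weights):

```python
import tensorflow as tf

# Small stand-in model; with MobileNetV2 the freezing loop is identical.
base_model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8),
    tf.keras.layers.Dense(2),
])

# Freeze every layer via the model's 'layers' attribute.
for layer in base_model.layers:
    layer.trainable = False

# No weights remain trainable, so only newly added head layers would train.
print(len(base_model.trainable_weights))  # 0
```

After freezing, you would typically stack a new classification head on top and compile; only the head's weights receive gradient updates.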
Fill both blanks to create a dictionary comprehension that maps image filenames to their augmented versions.
augmented_images = {
    filename: next(datagen.[1](image[None], batch_size=1))[0]
    for filename, image in images.items()
    if image.shape [2] (224, 224, 3)
}

'flow' generates augmented images from the original image, and '==' checks whether the image shape matches the target size.
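Filling the blanks with 'flow' and '==' as stated, the comprehension can be run against a small in-memory dict; the filenames and zero-filled arrays below are placeholder assumptions:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(horizontal_flip=True)

# Hypothetical dataset: filename -> HxWxC array.
images = {
    'cat.jpg': np.zeros((224, 224, 3), dtype=np.float32),
    'thumb.jpg': np.zeros((64, 64, 3), dtype=np.float32),
}

# image[None] adds the batch axis flow expects; [0] strips it again.
augmented_images = {
    filename: next(datagen.flow(image[None], batch_size=1))[0]
    for filename, image in images.items()
    if image.shape == (224, 224, 3)
}
print(sorted(augmented_images))  # ['cat.jpg']
```

Only the image whose shape exactly matches (224, 224, 3) survives the filter; the 64x64 thumbnail is skipped.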
Fill all three blanks to create a dictionary comprehension that filters images by size and applies augmentation.
filtered_augmented = {
    filename: next(datagen.[1](image[None], batch_size=1))[0]
    for filename, image in images.items()
    if image.[2][0] [3] 224
}

'flow' generates augmented images, 'shape' accesses the image dimensions, and '>=' keeps images whose first dimension (the height, in HxWxC layout) is at least 224 pixels.
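With 'flow', 'shape', and '>=' filled in, the size filter behaves as sketched below; again the dict of zero-filled arrays is an illustrative assumption:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(horizontal_flip=True)

# Hypothetical dataset with one large and one small image.
images = {
    'big.jpg': np.zeros((256, 256, 3), dtype=np.float32),
    'small.jpg': np.zeros((128, 128, 3), dtype=np.float32),
}

# shape[0] is the first dimension (height); only images >= 224 pass.
filtered_augmented = {
    filename: next(datagen.flow(image[None], batch_size=1))[0]
    for filename, image in images.items()
    if image.shape[0] >= 224
}
print(sorted(filtered_augmented))  # ['big.jpg']
```

Unlike the previous exercise's exact '==' match, '>=' admits any image at least 224 pixels tall, so oversized images pass through unresized.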