TensorFlow · ~20 mins

Data augmentation as regularization in TensorFlow - Practice Problems & Coding Challenges

Challenge - 5 Problems
Predict Output
intermediate
Effect of data augmentation on training accuracy
Consider the following TensorFlow code snippet that trains a simple CNN on the MNIST dataset with and without data augmentation. What will be the expected difference in training accuracy after 1 epoch?
TensorFlow
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train[..., tf.newaxis] / 255.0

# Data augmentation layer (factors are fractions of a full turn / of the image size)
data_augmentation = tf.keras.Sequential([
    layers.RandomRotation(0.1),         # rotations up to ±36 degrees
    layers.RandomTranslation(0.1, 0.1)  # shifts up to ±10% of height and width
])

# Model definition
model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    data_augmentation,
    layers.Conv2D(16, 3, activation='relu'),
    layers.Flatten(),
    layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs=1, batch_size=128, verbose=0)

train_acc = history.history['accuracy'][0]
A. Training accuracy will be lower than a model trained without augmentation after 1 epoch
B. Training accuracy will be higher than a model trained without augmentation after 1 epoch
C. Training accuracy will be exactly the same as a model trained without augmentation after 1 epoch
D. Training accuracy will be zero due to augmentation errors
💡 Hint
Think about how data augmentation changes the input images and affects model learning early in training.
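You can observe the mechanism behind this question without a full training run: Keras preprocessing layers only apply their random transforms when called with `training=True`, so during `fit` the model keeps seeing perturbed versions of each image. A minimal sketch on a dummy batch (not the graded solution):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Same augmentation pipeline as in the problem above
data_augmentation = tf.keras.Sequential([
    layers.RandomRotation(0.1),         # up to ±10% of a full turn (±36 degrees)
    layers.RandomTranslation(0.1, 0.1)  # up to ±10% shift
])

batch = tf.random.uniform((4, 28, 28, 1))

# training=True: random transforms are applied, so the model never sees
# the exact same pixels twice -> training accuracy rises more slowly
augmented = data_augmentation(batch, training=True)

# training=False (inference): these preprocessing layers act as identity
passthrough = data_augmentation(batch, training=False)

print(augmented.shape)
```

Because the augmented batch keeps the input shape, the pipeline can sit directly inside the model as in the snippet above.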
🧠 Conceptual
intermediate
Why is data augmentation considered a form of regularization?
Which of the following best explains why data augmentation acts as a regularizer in machine learning models?
A. It reduces the number of model parameters to prevent memorization
B. It directly modifies the loss function to penalize complex models
C. It stops training early to avoid overfitting
D. It increases the size and diversity of the training data, reducing overfitting by exposing the model to varied examples
💡 Hint
Think about how changing the input data affects the model's learning process.
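To make the "size and diversity" argument concrete: repeatedly augmenting one labeled image yields several distinct training examples at no extra labeling cost. A tiny sketch using a horizontal flip (the transform is my choice for illustration):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

flip = layers.RandomFlip("horizontal")

# One asymmetric 3x3 "image" with batch and channel axes (NHWC)
image = tf.reshape(tf.range(9, dtype=tf.float32), (1, 3, 3, 1))
mirror = tf.reverse(image, axis=[2])  # its horizontal mirror

# Each draw is either the original or the mirror: one labeled example
# effectively becomes two different inputs with the same label.
variants = [flip(image, training=True) for _ in range(8)]
```

The model therefore cannot simply memorize one fixed pixel pattern per label, which is exactly the regularizing effect the question asks about.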
Hyperparameter
advanced
Choosing augmentation intensity for regularization
You want to use data augmentation as regularization for an image classification model. Which hyperparameter setting is most likely to cause underfitting due to excessive augmentation?
A. Applying only horizontal flips
B. Applying very strong random rotations (up to 90 degrees) and translations (up to 50% of the image size)
C. Applying no augmentation at all
D. Applying mild random rotations (up to 5 degrees) and translations (up to 5% of the image size)
💡 Hint
Consider how extreme transformations might affect the model's ability to learn meaningful patterns.
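A common pitfall when tuning intensity is the units: Keras `RandomRotation` takes a fraction of a full turn, not degrees, and `RandomTranslation` takes a fraction of the image size. A small sketch converting the degree values from the options above (the helper name is my own):

```python
# Keras RandomRotation's factor is a fraction of a full 360-degree turn.
def degrees_to_factor(degrees):
    # hypothetical helper: convert degrees to a Keras rotation factor
    return degrees / 360.0

mild = degrees_to_factor(5)     # ~0.014: gentle regularization
strong = degrees_to_factor(90)  # 0.25: digits may become unrecognizable

print(mild, strong)
```

Passing a degree value like `90` directly as the factor would allow rotations of up to ±90 full turns, far beyond what option B already flags as excessive.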
Metrics
advanced
Impact of data augmentation on validation loss
After training a model with data augmentation, you observe the following: training loss is higher than without augmentation, but validation loss is lower. What does this indicate?
A. Data augmentation caused the model to underfit both training and validation data
B. Data augmentation caused the model to memorize the training data
C. Data augmentation improved generalization by preventing overfitting
D. Data augmentation has no effect on model performance
💡 Hint
Think about the relationship between training loss, validation loss, and overfitting.
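In Keras this pattern can be read straight off `history.history`: compare the gap between `val_loss` and `loss` with and without augmentation. The numbers below are hypothetical, assumed only for illustration:

```python
# Hypothetical end-of-training losses (assumed values, not measured)
no_aug = {"loss": 0.05, "val_loss": 0.30}    # low train loss, high val loss: overfitting
with_aug = {"loss": 0.15, "val_loss": 0.12}  # higher train loss, lower val loss

def generalization_gap(history):
    """val_loss minus loss: a large positive gap suggests overfitting."""
    return history["val_loss"] - history["loss"]

print(generalization_gap(no_aug), generalization_gap(with_aug))
```

A shrinking gap together with a lower validation loss is the signature of improved generalization, even though the training loss itself went up.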
🔧 Debug
expert
Debugging unexpected model behavior with data augmentation
You added a data augmentation layer to your TensorFlow model, but after training, the model's accuracy is stuck near random guessing. Which of the following is the most likely cause?
TensorFlow
import tensorflow as tf
from tensorflow.keras import layers

data_augmentation = tf.keras.Sequential([
    layers.RandomRotation(1.5),  # factor is a fraction of a full turn: rotations up to ±540 degrees
    layers.RandomZoom(0.5)
])

model = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    data_augmentation,
    layers.Conv2D(32, 3, activation='relu'),
    layers.Flatten(),
    layers.Dense(10, activation='softmax')
])
A. The RandomRotation factor is far too large (up to ±540 degrees, since the factor is a fraction of a full turn), making the digits unrecognizable
B. The RandomZoom value is too small, causing images to be zoomed out too much
C. The data augmentation layer is missing a normalization step
D. The model architecture is too simple for MNIST
💡 Hint
Consider how extreme augmentation parameters affect input data quality.
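Since `RandomRotation`'s factor is a fraction of a full turn, the snippet's `1.5` allows rotations of up to ±540 degrees, i.e., completely arbitrary orientations. A sketch of a milder pipeline that usually keeps MNIST digits learnable (the specific factors are my assumptions and should be tuned per task):

```python
import tensorflow as tf
from tensorflow.keras import layers

extreme_degrees = 1.5 * 360.0  # what factor=1.5 actually permits: ±540 degrees

# Milder settings: small rotations and zooms keep digits recognizable
data_augmentation = tf.keras.Sequential([
    layers.RandomRotation(0.03),  # up to about ±11 degrees
    layers.RandomZoom(0.1),       # up to ±10% zoom
])

batch = tf.random.uniform((2, 28, 28, 1))
out = data_augmentation(batch, training=True)
print(extreme_degrees, out.shape)
```

With settings like these the augmented digits remain close to the originals, so the model can still learn the class structure while gaining the regularization benefit.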