Prompt Engineering / GenAI · ~20 mins

Inpainting and outpainting in Prompt Engineering / GenAI - ML Experiment: Train & Evaluate

Experiment - Inpainting and outpainting
Problem: You have a model that fills missing parts inside images (inpainting) and extends images beyond their borders (outpainting). Currently, the model works well on training images but performs poorly on new images, producing blurry and incorrect fills.
Current Metrics: Training loss: 0.02, Validation loss: 0.15, Training PSNR: 35 dB, Validation PSNR: 22 dB
Issue: The model overfits the training data, causing poor generalization on validation images. The validation loss is much higher and image quality is low.
Your Task
Reduce overfitting to improve validation PSNR to at least 28 dB while keeping training PSNR below 33 dB.
Keep the same model architecture (U-Net based).
Do not increase training data size.
Adjust only training hyperparameters and regularization.
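The PSNR targets above can be checked with a small helper; a minimal sketch assuming images are scaled to [0, 1] (the function name is illustrative, not part of the experiment code):

```python
import numpy as np

def psnr(y_true, y_pred, max_val=1.0):
    # Peak signal-to-noise ratio: 10 * log10(max_val^2 / MSE)
    mse = np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# A uniform error of 0.1 gives MSE = 0.01, i.e. roughly 20 dB
clean = np.zeros((4, 4))
noisy = np.full((4, 4), 0.1)
print(psnr(clean, noisy))
```

Keras provides the equivalent as `tf.image.psnr(y_true, y_pred, max_val=1.0)`, which can be passed as a metric during training.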
Solution
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Define U-Net model with dropout
inputs = layers.Input(shape=(128, 128, 3))

# Encoder
c1 = layers.Conv2D(64, (3,3), activation='relu', padding='same')(inputs)
c1 = layers.Dropout(0.3)(c1)
c1 = layers.Conv2D(64, (3,3), activation='relu', padding='same')(c1)
p1 = layers.MaxPooling2D((2,2))(c1)

c2 = layers.Conv2D(128, (3,3), activation='relu', padding='same')(p1)
c2 = layers.Dropout(0.3)(c2)
c2 = layers.Conv2D(128, (3,3), activation='relu', padding='same')(c2)
p2 = layers.MaxPooling2D((2,2))(c2)

# Bottleneck
b = layers.Conv2D(256, (3,3), activation='relu', padding='same')(p2)
b = layers.Dropout(0.4)(b)
b = layers.Conv2D(256, (3,3), activation='relu', padding='same')(b)

# Decoder
u1 = layers.UpSampling2D((2,2))(b)
u1 = layers.concatenate([u1, c2])
c3 = layers.Conv2D(128, (3,3), activation='relu', padding='same')(u1)
c3 = layers.Dropout(0.3)(c3)
c3 = layers.Conv2D(128, (3,3), activation='relu', padding='same')(c3)

u2 = layers.UpSampling2D((2,2))(c3)
u2 = layers.concatenate([u2, c1])
c4 = layers.Conv2D(64, (3,3), activation='relu', padding='same')(u2)
c4 = layers.Dropout(0.3)(c4)
c4 = layers.Conv2D(64, (3,3), activation='relu', padding='same')(c4)

outputs = layers.Conv2D(3, (1,1), activation='sigmoid')(c4)

model = models.Model(inputs, outputs)

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005), loss='mse')

# Data augmentation. For image-to-image tasks the same random
# transform must be applied to inputs and targets, so two
# generators share a seed.
aug_args = dict(horizontal_flip=True, vertical_flip=True, rotation_range=20)
x_datagen = ImageDataGenerator(**aug_args)
y_datagen = ImageDataGenerator(**aug_args)

# Assume X_train, y_train, X_val, y_val are prepared numpy arrays
seed = 42
x_flow = x_datagen.flow(X_train, batch_size=32, seed=seed)
y_flow = y_datagen.flow(y_train, batch_size=32, seed=seed)
train_gen = ((x_batch, y_batch) for x_batch, y_batch in zip(x_flow, y_flow))

# Early stopping keeps the weights from the best validation epoch
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)

# Train model
model.fit(train_gen, steps_per_epoch=len(X_train) // 32, epochs=50,
          validation_data=(X_val, y_val), callbacks=[early_stop])
Added dropout layers after convolutional layers to reduce overfitting.
Applied data augmentation with flips and rotations to increase data variety.
Reduced learning rate from 0.001 to 0.0005 for smoother training.
Added early stopping to prevent over-training.
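Beyond dropout, L2 weight decay is another regularizer permitted by the task constraints (hyperparameters and regularization only). A minimal numpy sketch of the penalty term it adds to the loss; the lambda value here is illustrative, not tuned:

```python
import numpy as np

def l2_penalty(weight_tensors, lam=1e-4):
    # lambda * sum of squared weights across all layer tensors
    return lam * sum(np.sum(w ** 2) for w in weight_tensors)

weights = [np.ones((3, 3)), np.full((2,), 2.0)]  # toy weight tensors
# 9 * 1^2 + 2 * 2^2 = 17, scaled by 1e-4
penalty = l2_penalty(weights)
```

In Keras this corresponds to passing `kernel_regularizer=tf.keras.regularizers.l2(1e-4)` to each Conv2D layer.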
Results Interpretation

Before: Training PSNR 35 dB, Validation PSNR 22 dB (large gap, overfitting)

After: Training PSNR 31 dB, Validation PSNR 29 dB (smaller gap, better generalization)

Adding dropout and data augmentation reduces overfitting, improving validation image quality while slightly lowering training performance. Early stopping and lower learning rate help training stability.
Bonus Experiment
Try using a smaller U-Net model with fewer filters to reduce model complexity and see if validation performance improves further.
💡 Hint
Reducing model size can prevent overfitting by limiting capacity, but watch for underfitting if too small.
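To gauge how much the bonus experiment shrinks the model, conv-layer parameter counts can be compared directly; for a k x k convolution, params = (k * k * c_in + 1) * c_out, where the +1 is the per-filter bias:

```python
def conv_params(k, c_in, c_out):
    # k*k*c_in weights per output filter, plus one bias per filter
    return (k * k * c_in + 1) * c_out

# First encoder conv: full-width (64 filters) vs half-width (32)
full = conv_params(3, 3, 64)   # 1792 parameters
half = conv_params(3, 3, 32)   # 896 parameters
```

Halving every filter count roughly quarters the parameter count of the inner layers, since both c_in and c_out shrink.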