Prompt Engineering / GenAI · ~20 mins

Image understanding and description in Prompt Engineering / GenAI - ML Experiment: Train & Evaluate

Experiment - Image understanding and description
Problem: We want to build a model that looks at an image and writes a short sentence describing what it sees. The model is currently very accurate on training images but makes many mistakes on new images it has never seen.
Current Metrics: Training accuracy: 95%, Validation accuracy: 65%, Validation loss: 1.2
Issue: The model is overfitting: it performs very well on training images but poorly on validation images, so it does not generalize well.
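The overfitting diagnosis follows directly from the gap between the two accuracies. A minimal sketch of that check, using the metrics quoted above (the 0.10 threshold is an illustrative rule of thumb, not a fixed standard):

```python
# Metrics from the experiment description
train_acc, val_acc = 0.95, 0.65

# A large train/validation gap is the classic overfitting signal
gap = train_acc - val_acc
print(f"generalization gap: {gap:.2f}")  # generalization gap: 0.30

# Illustrative threshold: a gap this large suggests memorization
if gap > 0.10:
    print("likely overfitting: the model memorizes training images")
```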
Your Task
Reduce overfitting so that validation accuracy improves to at least 80% while keeping training accuracy below 90%.
You cannot change the dataset or add more data.
You must keep the same model architecture type (CNN + RNN for image captioning).
Solution
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, Embedding, LSTM, Dropout, Add
from tensorflow.keras.callbacks import EarlyStopping

# Load pre-trained CNN for image feature extraction
base_model = InceptionV3(weights='imagenet')
cnn_model = Model(base_model.input, base_model.layers[-2].output)

# Freeze CNN layers
for layer in cnn_model.layers:
    layer.trainable = False

# Placeholder sizes -- set these from your tokenizer / dataset
vocab_size = 10000          # caption vocabulary size
max_caption_length = 40     # captions padded to this length

# Define inputs
image_input = Input(shape=(299, 299, 3))
image_features = cnn_model(image_input)
image_features = Dropout(0.5)(image_features)  # Added dropout
image_features = Dense(256)(image_features)  # Project to match LSTM output dim

# Text input for captions
caption_input = Input(shape=(max_caption_length,))
caption_embedding = Embedding(input_dim=vocab_size, output_dim=256, mask_zero=True)(caption_input)
caption_lstm = LSTM(256)(caption_embedding)
caption_lstm = Dropout(0.5)(caption_lstm)  # Added dropout

# Combine image and caption features
decoder = Add()([image_features, caption_lstm])
outputs = Dense(vocab_size, activation='softmax')(decoder)

# Define model
model = Model(inputs=[image_input, caption_input], outputs=outputs)

# Compile model with lower learning rate
optimizer = tf.keras.optimizers.Adam(learning_rate=0.0001)
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])

# Early stopping callback
early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)

# Train model
model.fit(
    [train_images, train_captions], train_targets,
    epochs=20,
    batch_size=64,
    validation_data=([val_images, val_captions], val_targets),
    callbacks=[early_stop]
)
Added dropout after the image features and after the LSTM output to reduce overfitting.
Lowered the learning rate from the default 0.001 to 0.0001 for smoother training.
Added early stopping so training halts (and the best weights are restored) once validation loss stops improving.
Results Interpretation

Before: Training accuracy was 95%, validation accuracy was 65%, showing overfitting.

After: Training accuracy dropped to 88%, validation accuracy improved to 82%, and validation loss decreased, indicating better generalization.

Adding dropout and early stopping helps the model avoid memorizing training data and improves its ability to describe new images accurately.
Bonus Experiment
Try using data augmentation on the images to artificially increase dataset variety and see if validation accuracy improves further.
💡 Hint
Use simple image transformations like rotation, flipping, or zooming during training to help the model learn more robust features.
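The transformations from the hint can be expressed as a small Keras preprocessing pipeline placed in front of the CNN. A minimal sketch, assuming TensorFlow 2.x; the rotation and zoom factors are illustrative, not tuned:

```python
import tensorflow as tf

# Random augmentations applied only while training (training=True);
# at inference the layers pass images through unchanged
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),  # rotate up to ±10% of a full turn
    tf.keras.layers.RandomZoom(0.1),      # zoom in/out by up to 10%
])

# Shape is preserved, so the pipeline can feed the 299x299 CNN input directly
images = tf.random.uniform((4, 299, 299, 3))
augmented = augment(images, training=True)
print(tuple(augmented.shape))  # (4, 299, 299, 3)
```

Because the augmentations are layers, they can also be inserted between `image_input` and `cnn_model` so each epoch sees slightly different versions of the same training images.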