Computer Vision · ~20 mins

Text recognition pipeline in Computer Vision - ML Experiment: Train & Evaluate

Experiment - Text recognition pipeline
Problem: We want to build a model that reads text from images, such as signs or documents.
Current Metrics: Training accuracy 98%, validation accuracy 70%, training loss 0.05, validation loss 0.45
Issue: The model is overfitting. It performs very well on training data but poorly on new images.
Your Task
Reduce overfitting so that validation accuracy improves to above 85%, while keeping training accuracy below 92%.
You cannot change the dataset or add more data.
You must keep the same model architecture (a CNN + RNN for text recognition).
You can only adjust training settings and add regularization.
Solution
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

# Define the model architecture (CNN + RNN for text recognition)
inputs = layers.Input(shape=(128, 32, 1))  # Example input size: width=128, height=32, grayscale

# CNN layers
x = layers.Conv2D(64, (3,3), activation='relu', padding='same')(inputs)
x = layers.MaxPooling2D((2,2))(x)
x = layers.Dropout(0.25)(x)  # Added dropout

x = layers.Conv2D(128, (3,3), activation='relu', padding='same')(x)
x = layers.MaxPooling2D((2,2))(x)
x = layers.Dropout(0.25)(x)  # Added dropout

# Prepare for RNN: collapse height and channel dims so each width position becomes a timestep
shape = x.shape
x = layers.Reshape((shape[1], shape[2] * shape[3]))(x)

# RNN layers
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
x = layers.Bidirectional(layers.LSTM(64, return_sequences=True))(x)

# Output layer
outputs = layers.Dense(80, activation='softmax')(x)  # 80 possible characters

model = models.Model(inputs, outputs)

# Compile with lower learning rate
optimizer = tf.keras.optimizers.Adam(learning_rate=0.0005)
model.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])

# Early stopping callback
early_stop = callbacks.EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)

# Assume X_train, y_train, X_val, y_val are prepared
# model.fit(X_train, y_train, epochs=50, batch_size=32, validation_data=(X_val, y_val), callbacks=[early_stop])

# Note: Data augmentation can be added before training if desired.
Key Changes

Added dropout layers after each CNN block to reduce overfitting.
Lowered the learning rate from the default to 0.0005 for smoother, less aggressive updates.
Added early stopping so training halts when validation loss stops improving.
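If dropout and early stopping alone don't push validation accuracy above 85%, L2 weight decay is another regularizer that stays within the task's constraints (no architecture or data changes). L2 is not part of the lesson's solution; the sketch below shows how it would attach to the same conv layers, with an illustrative coefficient:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

# Sketch only: the coefficient 1e-4 is an illustrative starting point, not a tuned value.
l2 = regularizers.l2(1e-4)

inputs = layers.Input(shape=(128, 32, 1))
x = layers.Conv2D(64, (3, 3), activation='relu', padding='same',
                  kernel_regularizer=l2)(inputs)
x = layers.MaxPooling2D((2, 2))(x)
x = layers.Conv2D(128, (3, 3), activation='relu', padding='same',
                  kernel_regularizer=l2)(x)
model = tf.keras.Model(inputs, x)

# Keras collects the penalty terms in model.losses and adds them to the training loss.
```

Weight decay penalizes large kernel weights, which discourages the network from memorizing individual training examples.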
Results Interpretation

Before: Training accuracy 98%, Validation accuracy 70%, Training loss 0.05, Validation loss 0.45

After: Training accuracy 90%, Validation accuracy 87%, Training loss 0.15, Validation loss 0.30

Dropout and early stopping help the model generalize by preventing it from memorizing the training data, and the lower learning rate lets it learn more gradually, improving validation accuracy.
Bonus Experiment
Try adding data augmentation like random rotations or brightness changes to the training images to further improve validation accuracy.
💡 Hint
Use TensorFlow's ImageDataGenerator or tf.image functions to create augmented images on the fly during training.
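One way to sketch this with Keras preprocessing layers and tf.image (the rotation/translation factors below are illustrative guesses, not tuned values; text becomes unreadable if rotated too far):

```python
import tensorflow as tf

# Small random geometric distortions; factors are illustrative assumptions.
geometric = tf.keras.Sequential([
    tf.keras.layers.RandomRotation(0.02, fill_mode='constant'),       # roughly ±7 degrees
    tf.keras.layers.RandomTranslation(0.05, 0.05, fill_mode='constant'),
])

def augment_batch(images):
    """Randomly rotate, shift, and brighten a batch of grayscale text images."""
    images = geometric(images, training=True)
    images = tf.image.random_brightness(images, max_delta=0.2)
    return tf.clip_by_value(images, 0.0, 1.0)

# With a tf.data pipeline, augmentation runs on the fly during training:
# train_ds = train_ds.map(lambda x, y: (augment_batch(x), y))
```

Because the distortions are resampled every epoch, the model never sees exactly the same image twice, which further discourages memorization.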