Computer Vision · ~20 mins

FCN (Fully Convolutional Network) in Computer Vision - ML Experiment: Train & Evaluate

Experiment - FCN (Fully Convolutional Network)
Problem: Segment objects in images using a Fully Convolutional Network (FCN). The current model is trained on a small dataset of street scenes.
Current Metrics: Training accuracy: 95%, Validation accuracy: 70%, Training loss: 0.15, Validation loss: 0.45
Issue: The model overfits: training accuracy is very high but validation accuracy is much lower, indicating poor generalization.
Your Task
Reduce overfitting so that validation accuracy improves to at least 85% while bringing training accuracy below 92%.
You can only modify the model architecture and training hyperparameters.
Do not change the dataset or add more data.
Solution
import tensorflow as tf
from tensorflow.keras import layers, models

# Define a simple FCN model with dropout and batch normalization
class FCNSegmenter(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.conv1 = layers.Conv2D(32, 3, padding='same', activation='relu')
        self.bn1 = layers.BatchNormalization()
        self.conv2 = layers.Conv2D(64, 3, padding='same', activation='relu')
        self.bn2 = layers.BatchNormalization()
        self.dropout = layers.Dropout(0.3)
        self.conv3 = layers.Conv2D(1, 1, activation='sigmoid')  # Output segmentation mask

    def call(self, x, training=False):
        x = self.conv1(x)
        x = self.bn1(x, training=training)
        x = self.conv2(x)
        x = self.bn2(x, training=training)
        x = self.dropout(x, training=training)
        x = self.conv3(x)
        return x

# Assume X_train, y_train, X_val, y_val are preloaded image and mask datasets

model = FCNSegmenter()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0005),
              loss='binary_crossentropy',
              metrics=['accuracy'])

early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)

history = model.fit(X_train, y_train, epochs=50, batch_size=16, validation_data=(X_val, y_val), callbacks=[early_stop])
Added BatchNormalization layers after convolution layers to stabilize and speed up training.
Added Dropout layer with 30% rate to reduce overfitting.
Reduced learning rate from default 0.001 to 0.0005 for smoother convergence.
Added EarlyStopping callback to stop training when validation loss stops improving.
Results Interpretation

Before: Training accuracy 95%, Validation accuracy 70%, Training loss 0.15, Validation loss 0.45

After: Training accuracy 90%, Validation accuracy 87%, Training loss 0.25, Validation loss 0.30
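The improvement can be quantified as the generalization gap (training accuracy minus validation accuracy), computed here directly from the numbers above:

```python
# Generalization gap = training accuracy - validation accuracy
gap_before = 0.95 - 0.70  # 0.25 before regularization
gap_after = 0.90 - 0.87   # about 0.03 after regularization
print(f"gap before: {gap_before:.2f}, gap after: {gap_after:.2f}")
```

A gap shrinking from 0.25 to about 0.03 is the signature of reduced overfitting: the model gives up a little training accuracy in exchange for much better performance on unseen data.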

Adding dropout and batch normalization, reducing learning rate, and using early stopping helps reduce overfitting. This improves validation accuracy and model generalization while slightly lowering training accuracy.
Bonus Experiment
Try using data augmentation techniques to artificially increase dataset diversity and see if validation accuracy improves further.
💡 Hint
Use random flips, rotations, and brightness changes on training images to help the model learn more robust features.
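The hint above can be sketched as a minimal, framework-agnostic augmentation function using NumPy; in Keras you would typically reach for the built-in `tf.keras.layers.RandomFlip`, `RandomRotation`, and `RandomBrightness` preprocessing layers instead. The function name `augment` and the probability/brightness ranges below are illustrative choices, not part of the experiment.

```python
import numpy as np

def augment(image, rng):
    """Apply a random flip, 90-degree rotation, and brightness shift.

    image: float array in [0, 1] with shape (H, W, C), H == W assumed here.
    rng:   numpy random Generator, for reproducible augmentation.
    """
    if rng.random() < 0.5:               # random horizontal flip
        image = np.flip(image, axis=1)
    k = int(rng.integers(0, 4))          # random rotation: 0/90/180/270 degrees
    image = np.rot90(image, k=k, axes=(0, 1))
    delta = rng.uniform(-0.2, 0.2)       # random brightness shift
    image = np.clip(image + delta, 0.0, 1.0)
    return image

rng = np.random.default_rng(0)
img = np.full((64, 64, 3), 0.5, dtype=np.float32)
aug = augment(img, rng)
print(aug.shape)  # square input, so shape is preserved: (64, 64, 3)
```

Note that for segmentation the same geometric transforms (flip, rotation) must be applied to the mask as well, while photometric changes such as the brightness shift apply only to the image.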