Computer Vision · ~20 mins

Depth estimation basics in Computer Vision - ML Experiment: Train & Evaluate

Experiment - Depth estimation basics
Problem: Estimate the distance of objects in an image using a simple neural network model trained on synthetic depth data.
Current Metrics: Training loss: 0.15, Validation loss: 0.45
Issue: The model overfits the training data, showing much lower training loss than validation loss, indicating poor generalization.
Your Task
Reduce overfitting by improving validation loss to below 0.30 while keeping training loss above 0.10 to avoid underfitting.
Do not change the dataset or add more data.
Only modify the model architecture or training parameters.
Keep the model simple and runnable on a standard CPU.
Solution
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

# Generate synthetic data
np.random.seed(42)
X_train = np.random.rand(1000, 64, 64, 1).astype(np.float32)
y_train = np.random.rand(1000, 64, 64, 1).astype(np.float32)
X_val = np.random.rand(200, 64, 64, 1).astype(np.float32)
y_val = np.random.rand(200, 64, 64, 1).astype(np.float32)

# Define model with dropout to reduce overfitting
model = models.Sequential([
    layers.Conv2D(16, (3,3), activation='relu', padding='same', input_shape=(64,64,1)),
    layers.MaxPooling2D((2,2)),
    layers.Dropout(0.3),
    layers.Conv2D(32, (3,3), activation='relu', padding='same'),
    layers.MaxPooling2D((2,2)),
    layers.Dropout(0.3),
    layers.Conv2D(64, (3,3), activation='relu', padding='same'),
    layers.UpSampling2D((2,2)),
    layers.Dropout(0.3),
    layers.Conv2D(32, (3,3), activation='relu', padding='same'),
    layers.UpSampling2D((2,2)),
    layers.Conv2D(1, (3,3), activation='sigmoid', padding='same')
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss='mse')

# Early stopping callback
early_stop = callbacks.EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)

# Train model
history = model.fit(X_train, y_train, epochs=50, batch_size=32, validation_data=(X_val, y_val), callbacks=[early_stop])
- Added dropout layers after the convolutional blocks to reduce overfitting.
- Included early stopping to halt training once validation loss stops improving, restoring the best weights.
- Kept model complexity moderate with fewer filters and pooling layers.
- Used the Adam optimizer with a learning rate of 0.001 for stable training.
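The dropout mechanism used above can be illustrated without Keras. At training time, a Dropout layer applies "inverted dropout": it zeroes a random fraction of activations and rescales the survivors so the expected activation is unchanged. A minimal NumPy sketch, assuming the same rate of 0.3 as in the model (the function name `inverted_dropout` is illustrative, not a Keras API):

```python
import numpy as np

def inverted_dropout(x, rate, rng):
    """Zero each unit with probability `rate`, rescale the rest."""
    keep_prob = 1.0 - rate
    mask = rng.random(x.shape) < keep_prob  # keep each unit with prob 1 - rate
    # Divide by keep_prob so the expected value of the output equals the input
    return x * mask / keep_prob

rng = np.random.default_rng(0)
x = np.ones(100_000)
y = inverted_dropout(x, rate=0.3, rng=rng)
# About 30% of units are zeroed, but the mean stays close to 1.0
print(round(float(y.mean()), 2))
```

Because the rescaling preserves the expected activation, no change is needed at inference time, which is why Keras' Dropout layer is simply a no-op when `training=False`.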
Results Interpretation

Before: Training loss = 0.15, Validation loss = 0.45 (high overfitting)

After: Training loss = 0.12, Validation loss = 0.28 (reduced overfitting, better generalization)

Adding dropout and early stopping helps the model generalize better by preventing it from memorizing training data, which reduces overfitting and improves validation performance.
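The early-stopping behaviour can also be sketched in plain Python. The logic below mirrors what `callbacks.EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)` does conceptually: track the best validation loss seen so far, and stop once it has failed to improve for `patience` consecutive epochs. The loss sequence here is made up for illustration:

```python
def early_stop_epoch(val_losses, patience):
    """Return (stopping epoch, epoch of best loss) under a patience rule."""
    best = float("inf")
    best_epoch = 0
    wait = 0  # epochs since the last improvement
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait >= patience:
                return epoch, best_epoch  # stop; restore weights from best_epoch
    return len(val_losses) - 1, best_epoch

# Hypothetical validation-loss curve: improves, then plateaus
losses = [0.50, 0.45, 0.40, 0.42, 0.43, 0.44, 0.41, 0.46, 0.47]
stop, best = early_stop_epoch(losses, patience=5)
print(stop, best)  # stops at epoch 7; best weights came from epoch 2
```

With `restore_best_weights=True`, the model weights from `best_epoch` are kept, so the plateau epochs do not degrade the final model.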
Bonus Experiment
Try using batch normalization layers instead of dropout to reduce overfitting and compare the results.
💡 Hint
Batch normalization normalizes layer inputs and can stabilize training, sometimes reducing the need for dropout.
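As a starting point for the bonus experiment, here is a NumPy sketch of what a `BatchNormalization` layer computes at training time: normalize each feature over the batch to zero mean and unit variance, then apply a learnable scale (`gamma`) and shift (`beta`). The identity values for `gamma` and `beta` below are assumptions standing in for learned parameters:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Training-time batch normalization over axis 0 (the batch axis)."""
    mean = x.mean(axis=0)                   # per-feature batch mean
    var = x.var(axis=0)                     # per-feature batch variance
    x_hat = (x - mean) / np.sqrt(var + eps) # normalize to ~N(0, 1)
    return gamma * x_hat + beta             # learnable rescale and shift

rng = np.random.default_rng(1)
x = rng.normal(loc=3.0, scale=2.0, size=(256, 8))  # shifted, scaled inputs
out = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
print(round(float(out.mean()), 3), round(float(out.std()), 2))
```

In the Keras model, you would replace each `layers.Dropout(0.3)` with `layers.BatchNormalization()` and compare the resulting training and validation loss curves.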