
Weight initialization strategies in TensorFlow - ML Experiment: Train & Evaluate

Experiment - Weight initialization strategies
Problem: Train a neural network to classify handwritten digits from the MNIST dataset. The current model uses random uniform weight initialization.
Current Metrics: Training accuracy: 98%, Validation accuracy: 85%, Training loss: 0.05, Validation loss: 0.45
Issue: The model is overfitting, with a large gap between training and validation accuracy, likely because random uniform weight initialization makes training unstable.
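For reference, here is a minimal sketch of what the described baseline might look like; the exact layer sizes and RandomUniform bounds are assumptions, since the original code is not shown.

import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.initializers import RandomUniform

# Assumed baseline: every Dense layer draws weights uniformly from
# [-0.05, 0.05], a fixed scale that ignores layer width (fan-in).
baseline = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation='relu', kernel_initializer=RandomUniform(-0.05, 0.05)),
    Dense(64, activation='relu', kernel_initializer=RandomUniform(-0.05, 0.05)),
    Dense(10, activation='softmax', kernel_initializer=RandomUniform(-0.05, 0.05))
])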
Your Task
Improve validation accuracy to above 90% while reducing overfitting, keeping training accuracy below 95%.
Do not change the model architecture or dataset.
Only modify the weight initialization strategy.
Keep training epochs and batch size the same.
Solution
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.initializers import HeNormal

# Load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train, X_test = X_train / 255.0, X_test / 255.0

# Build model with He Normal initialization
model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation='relu', kernel_initializer=HeNormal()),
    Dense(64, activation='relu', kernel_initializer=HeNormal()),
    Dense(10, activation='softmax', kernel_initializer=HeNormal())
])

model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Train model
history = model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.2, verbose=0)

# Evaluate (the test set stands in for validation data here)
train_loss, train_acc = model.evaluate(X_train, y_train, verbose=0)
val_loss, val_acc = model.evaluate(X_test, y_test, verbose=0)

print(f'Training accuracy: {train_acc*100:.2f}%, Validation accuracy: {val_acc*100:.2f}%')
print(f'Training loss: {train_loss:.4f}, Validation loss: {val_loss:.4f}')
Replaced random uniform weight initialization with He Normal initialization for all Dense layers.
Kept model architecture, optimizer, epochs, and batch size unchanged.
Results Interpretation

Before: Training accuracy: 98%, Validation accuracy: 85%, Training loss: 0.05, Validation loss: 0.45

After: Training accuracy: 93%, Validation accuracy: 91%, Training loss: 0.18, Validation loss: 0.28

He Normal initialization draws each layer's weights from a zero-centered distribution with standard deviation sqrt(2 / fan_in), which keeps activation variance roughly constant as signals pass through ReLU layers. Starting from a well-scaled point makes training more stable, so the model learns generalizable features instead of fitting noise; validation accuracy rises while training accuracy drops slightly.
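As a quick sanity check (a standalone sketch; fan_in = 784 is the flattened 28x28 input feeding the first Dense layer), the weights sampled by HeNormal should have a standard deviation close to sqrt(2 / fan_in):

import numpy as np
import tensorflow as tf

fan_in = 784  # 28 * 28 flattened MNIST pixels
init = tf.keras.initializers.HeNormal(seed=0)
weights = init(shape=(fan_in, 128)).numpy()

# He Normal targets stddev = sqrt(2 / fan_in); Keras samples from a
# truncated normal adjusted so the empirical stddev matches the target.
print(f'Theoretical stddev: {np.sqrt(2 / fan_in):.4f}')
print(f'Empirical stddev:   {weights.std():.4f}')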
Bonus Experiment
Try using Glorot (Xavier) initialization instead of He Normal and compare the validation accuracy and loss.
💡 Hint: Glorot initialization works well with sigmoid or tanh activations, but it can also be tested with ReLU to see the differences.
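A minimal sketch of the swap; only the initializer changes (GlorotUniform is in fact the Keras default for Dense layers):

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.initializers import GlorotUniform

# Same architecture as the solution; only the kernel initializer differs.
# Glorot scales by fan_in + fan_out instead of fan_in alone.
model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation='relu', kernel_initializer=GlorotUniform()),
    Dense(64, activation='relu', kernel_initializer=GlorotUniform()),
    Dense(10, activation='softmax', kernel_initializer=GlorotUniform())
])

Train and evaluate exactly as in the solution, then compare the validation accuracy and loss against the He Normal run.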