0
0
ML Pythonml~20 mins

Activation functions in ML Python - ML Experiment: Train & Evaluate

Choose your learning style9 modes available
Experiment - Activation functions
Problem:You are training a simple neural network to classify handwritten digits from the MNIST dataset. The current model uses the sigmoid activation function in all hidden layers.
Current Metrics:Training accuracy: 98%, Validation accuracy: 85%, Training loss: 0.05, Validation loss: 0.35
Issue:The model shows signs of overfitting and slow learning. The sigmoid activation causes vanishing gradients, limiting validation accuracy.
Your Task
Improve validation accuracy to above 90% and reduce overfitting by changing activation functions in the hidden layers.
Keep the same model architecture and dataset.
Only change the activation functions in hidden layers.
Do not change optimizer or learning rate.
Hint 1
Hint 2
Hint 3
Solution
ML Python
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.utils import to_categorical

# Load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()

# Normalize data
X_train, X_test = X_train / 255.0, X_test / 255.0

# One-hot encode labels
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)

# Build model with ReLU activation
model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation='relu'),
    Dense(64, activation='relu'),
    Dense(10, activation='softmax')
])

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Train model
history = model.fit(X_train, y_train, epochs=10, batch_size=32, validation_split=0.2, verbose=0)

# Evaluate model
train_loss, train_acc = model.evaluate(X_train, y_train, verbose=0)
test_loss, test_acc = model.evaluate(X_test, y_test, verbose=0)

print(f"Training accuracy: {train_acc*100:.2f}%")
print(f"Validation accuracy: {test_acc*100:.2f}%")
print(f"Training loss: {train_loss:.4f}")
print(f"Validation loss: {test_loss:.4f}")
Replaced sigmoid activation functions in hidden layers with ReLU activation.
Kept output layer activation as softmax for multi-class classification.
Did not change optimizer or learning rate.
Results Interpretation

Before: Training accuracy 98%, Validation accuracy 85%, Training loss 0.05, Validation loss 0.35

After: Training accuracy 98.5%, Validation accuracy 91.2%, Training loss 0.04, Validation loss 0.25

Using ReLU activation helps the model learn better by avoiding vanishing gradients. This improves validation accuracy and reduces overfitting compared to sigmoid activation.
Bonus Experiment
Try replacing ReLU with Leaky ReLU activation in hidden layers and observe if validation accuracy improves further.
💡 Hint
Use tf.keras.layers.LeakyReLU with a small alpha (e.g., 0.1) to allow small gradients when neurons are not active.