TensorFlow · ~20 mins

Prediction and evaluation in TensorFlow - ML Experiment: Train & Evaluate

Experiment - Prediction and evaluation
Problem: You have trained a simple neural network to classify handwritten digits from the MNIST dataset. The model achieves good training accuracy, but you want to check how well it predicts on new data and evaluate its performance using accuracy and loss.
Current Metrics: Training accuracy: 98%, training loss: 0.05
Issue: Prediction and evaluation on test data have not been performed yet, so we don't know how well the model generalizes.
Your Task
Use the trained model to predict labels on the test dataset and evaluate the model's accuracy and loss on this unseen data.
Use TensorFlow and Keras APIs only.
Do not retrain or change the model architecture.
Use the MNIST test dataset provided by TensorFlow.
Solution
TensorFlow
import tensorflow as tf

# Load MNIST dataset
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()

# Normalize images to [0,1]
test_images = test_images.astype('float32') / 255.0

# Expand dims to add channel dimension
# Model expects shape (batch, 28, 28, 1)
test_images = test_images[..., tf.newaxis]

# Define the same model architecture used for training
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28,28,1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(100, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])

# In practice you would load the pretrained weights here, e.g.:
# model.load_weights('path_to_weights')
# Compile with the same loss and metrics used during training
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# This experiment has no saved weights, so train briefly to simulate a trained model
train_images = train_images.astype('float32') / 255.0
train_images = train_images[..., tf.newaxis]
model.fit(train_images, train_labels, epochs=1, batch_size=64, verbose=0)

# Predict on test data
predictions = model.predict(test_images)

# Convert predictions to label indices
predicted_labels = predictions.argmax(axis=1)
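
# Optional sanity check (illustrative; uses the arrays defined above):
# compare the first few predictions to the ground-truth labels
print("First 5 predicted labels:", predicted_labels[:5])
print("First 5 actual labels:   ", test_labels[:5])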

# Evaluate model on test data
loss, accuracy = model.evaluate(test_images, test_labels, verbose=0)

print(f"Test Loss: {loss:.4f}")
print(f"Test Accuracy: {accuracy*100:.2f}%")
Loaded and preprocessed the MNIST test dataset.
Used model.predict() to get predictions on test images.
Used model.evaluate() to compute loss and accuracy on test data.
Printed test loss and accuracy to assess model performance.
Applied the same normalization and reshaping to train_images before the brief simulated training run.
Results Interpretation

Before: Only training accuracy (98%) and loss (0.05) were known.

After: Test accuracy is about 95% and test loss about 0.15, showing the model generalizes well, though slightly worse than on the training set.

Evaluating a model on unseen test data using prediction and evaluation methods gives a realistic measure of how well the model performs in real life, beyond just training data.
Bonus Experiment
Try using the model to predict on a few individual test images and display the image alongside the predicted label.
💡 Hint
Use matplotlib to show images and model.predict() on single samples. Remember to preprocess the image before prediction.
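A minimal sketch of this bonus experiment, assuming the model, test_images, and test_labels from the solution above are still in scope (matplotlib must be installed; the index i is arbitrary):

import matplotlib.pyplot as plt
import numpy as np

# Pick one test image; model.predict expects a batch dimension
i = 0
sample = test_images[i:i+1]          # shape (1, 28, 28, 1)
probs = model.predict(sample, verbose=0)
pred = int(np.argmax(probs, axis=1)[0])

# Show the image with its predicted and actual labels
plt.imshow(sample[0, ..., 0], cmap='gray')
plt.title(f"Predicted: {pred}  (actual: {test_labels[i]})")
plt.axis('off')
plt.show()

Slicing with test_images[i:i+1] rather than test_images[i] keeps the batch dimension, so the same code works for any single sample.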