Complete the code to load the MNIST dataset for handwriting recognition.
from tensorflow.keras.datasets import [1]
(train_images, train_labels), (test_images, test_labels) = [1].load_data()
The MNIST dataset contains handwritten digit images commonly used for handwriting recognition tasks.
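For reference, a filled-in version of the snippet above, with the blank completed as `mnist`. It assumes TensorFlow is installed and the MNIST download mirror is reachable:

```python
from tensorflow.keras.datasets import mnist

# load_data() returns two (images, labels) tuples: 60,000 training
# samples and 10,000 test samples of 28x28 grayscale digit images.
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

print(train_images.shape)  # (60000, 28, 28)
print(test_images.shape)   # (10000, 28, 28)
```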
Complete the code to normalize the pixel values of the images between 0 and 1.
train_images = train_images.astype('float32') / [1]
test_images = test_images.astype('float32') / [1]
Pixel values range from 0 to 255, so dividing by 255 scales them to 0-1.
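The same scaling can be checked on a small array without loading the full dataset. This sketch uses NumPy and a made-up 2x2 "image":

```python
import numpy as np

# Hypothetical raw pixel values in the usual 0-255 uint8 range.
pixels = np.array([[0, 128], [200, 255]], dtype='uint8')

# Cast to float32 first so the division is not integer division,
# then divide by 255 to map the values into [0, 1].
scaled = pixels.astype('float32') / 255

print(scaled.min(), scaled.max())  # 0.0 1.0
```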
Fix the error in the model definition by choosing the correct activation function for the output layer.
from tensorflow.keras import models, layers

model = models.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation=[1])
])
For multi-class classification such as digits 0-9, the 'softmax' activation outputs a probability for each class, and the probabilities sum to 1.
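A quick NumPy sketch of what softmax does with the final layer's raw outputs (the logit values here are made up):

```python
import numpy as np

# Hypothetical raw outputs (logits) from the final Dense(10) layer.
logits = np.array([2.0, 1.0, 0.1, -1.0, 0.5, 0.0, 1.5, -0.5, 0.3, 0.8])

# Softmax: exponentiate (shifted by the max for numerical stability)
# and normalize so the ten values form a probability distribution.
exp = np.exp(logits - logits.max())
probs = exp / exp.sum()

print(probs.sum())     # 1.0 (up to floating-point rounding)
print(probs.argmax())  # 0 -> the class with the largest logit
```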
Fill both blanks to compile the model with the correct loss function and optimizer for handwriting recognition.
model.compile(optimizer=[1], loss=[2], metrics=['accuracy'])
The Adam optimizer ('adam') is a popular choice for training neural networks. Sparse categorical crossentropy ('sparse_categorical_crossentropy') is the loss for multi-class classification when the labels are integers rather than one-hot vectors.
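The loss for a single sample can be computed by hand: with an integer label, sparse categorical crossentropy is just the negative log of the predicted probability for the true class. A small NumPy sketch with made-up probabilities:

```python
import numpy as np

# Hypothetical softmax output for one sample, plus its integer label.
probs = np.array([0.05, 0.05, 0.7, 0.05, 0.05,
                  0.02, 0.03, 0.02, 0.02, 0.01])
true_label = 2  # an integer class index, not a one-hot vector

# Sparse categorical crossentropy for this sample: -log(p[true_label]).
loss = -np.log(probs[true_label])

print(round(float(loss), 4))  # 0.3567
```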
Fill all three blanks to train the model for 5 epochs with a batch size of 64 and validate on test data.
history = model.fit(train_images, train_labels, epochs=[1], batch_size=[2], validation_data=([3], test_labels))
Training for 5 epochs with a batch size of 64 is a common starting point. The validation data is the test images and labels.
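Putting the pieces together, the fit call above can be exercised end to end. To keep this sketch fast it trains on small random arrays shaped like normalized MNIST rather than the real dataset, so the accuracy numbers are meaningless; only the call signature matters:

```python
import numpy as np
from tensorflow.keras import models, layers

# Small random stand-ins shaped like (normalized) MNIST data.
rng = np.random.default_rng(0)
train_images = rng.random((256, 28, 28), dtype=np.float32)
train_labels = rng.integers(0, 10, size=256)
test_images = rng.random((64, 28, 28), dtype=np.float32)
test_labels = rng.integers(0, 10, size=64)

model = models.Sequential([
    layers.Flatten(input_shape=(28, 28)),
    layers.Dense(128, activation='relu'),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# 5 epochs, batch size 64, validating on the held-out arrays.
history = model.fit(train_images, train_labels,
                    epochs=5, batch_size=64,
                    validation_data=(test_images, test_labels),
                    verbose=0)

print(len(history.history['loss']))  # 5 -> one loss value per epoch
```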