Imagine you trained a model to recognize cats and dogs. Why should you test it on new pictures it hasn't seen before?
Think about whether the model can guess right on new, different pictures.
Testing on unseen data reveals whether the model learned general patterns or merely memorized the training examples. Only a model that generalizes will work well in real life.
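The memorization failure mode can be shown with a toy sketch. The "model" below is a hypothetical lookup table that stores its training examples verbatim: it scores perfectly on the data it memorized and fails completely on an unseen example, which is exactly what a held-out test set exposes.

```python
# Toy "memorizer" model: stores training examples and can only
# answer by exact lookup. (Illustrative sketch, not a real classifier.)
train = {(0, 0): "cat", (1, 1): "dog", (0, 1): "cat"}
test = {(1, 0): "dog"}  # an example the model has never seen

def memorizer(x):
    # "Predicts" only by looking up examples seen during training.
    return train.get(x, "unknown")

train_acc = sum(memorizer(x) == y for x, y in train.items()) / len(train)
test_acc = sum(memorizer(x) == y for x, y in test.items()) / len(test)
print(train_acc)  # 1.0 -- perfect on memorized training data
print(test_acc)   # 0.0 -- useless on new data
```

Training accuracy alone would make this model look flawless; only the unseen test example reveals that nothing general was learned.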
You have a model that detects rare diseases. Most people are healthy, so the data is imbalanced. Which metric helps you understand if the model is reliable?
Think about a metric that balances both false alarms and missed cases.
F1 Score balances precision and recall, making it useful when classes are imbalanced. Accuracy can be misleading if most data belongs to one class.
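A small hand-computed sketch (with made-up numbers) shows how accuracy and F1 can disagree on imbalanced data. Suppose 5 of 100 patients are sick and a model catches 2 of them while raising 1 false alarm:

```python
# Hypothetical imbalanced data: 5 sick (1), 95 healthy (0).
y_true = [1] * 5 + [0] * 95
# Model catches 2 sick patients, misses 3, and raises 1 false alarm.
y_pred = [1, 1, 0, 0, 0] + [1] + [0] * 94

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
precision = tp / (tp + fp)          # of flagged patients, how many are sick
recall = tp / (tp + fn)             # of sick patients, how many were caught
f1 = 2 * precision * recall / (precision + recall)

print(f"Accuracy: {accuracy:.2f}")  # 0.96 -- looks excellent
print(f"F1 Score: {f1:.2f}")        # 0.50 -- reveals the missed cases
```

Accuracy is 0.96 mostly because the healthy majority is easy to get right; the F1 score of 0.50 exposes that the model misses most of the rare sick cases.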
Consider this TensorFlow code that evaluates a model on test data and prints accuracy. What will it print?
```python
import tensorflow as tf
import numpy as np

# Dummy test data
x_test = np.random.random((10, 5))
y_test = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0])

# Simple model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(5,))
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Fake training to set weights
model.fit(x_test, y_test, epochs=1, verbose=0)

# Evaluate
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print(f"Accuracy: {accuracy:.2f}")
```
The model is very simple and trained only briefly on small data.
The model is a single neuron with sigmoid activation, trained for just 1 epoch on 10 random samples. It cannot fit the data well in that time, so the printed accuracy will be near chance level (around 0.5), though the exact value varies from run to run because the inputs and initial weights are random.
During training, you set the model to validate more often on the validation set. What is the main effect of this?
Think about what happens when you check the model's accuracy more often during training.
More frequent validation gives better insight into how the model is improving, but each validation pass adds overhead, making training slower. It does not by itself prevent overfitting.
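In Keras this trade-off is controlled by the `validation_freq` argument to `model.fit`. The sketch below (using made-up random data) validates after every epoch; raising `validation_freq` to, say, 5 would run the validation pass only every 5th epoch, trading insight for speed.

```python
import numpy as np
import tensorflow as tf

# Made-up data purely for illustration.
x = np.random.random((100, 5)).astype("float32")
y = np.random.randint(0, 2, size=(100,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation="sigmoid", input_shape=(5,))
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# validation_freq=1: run the validation pass after every epoch
# (maximum insight into progress, maximum overhead).
history = model.fit(x, y, epochs=10,
                    validation_split=0.2,
                    validation_freq=1, verbose=0)

print(len(history.history["val_loss"]))  # 10 validation passes, one per epoch
```

With `validation_freq=1` the history records a validation loss for every epoch; with a larger value you would see proportionally fewer entries and a faster training loop.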
Look at this code snippet that tries to evaluate a model before training it. Does it raise an error?
```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, activation='sigmoid', input_shape=(3,))
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Missing training step
x_test = tf.random.normal((5, 3))
y_test = tf.constant([1, 0, 1, 0, 1])
loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
print(f"Accuracy: {accuracy:.2f}")
```
Think about what happens if you evaluate a model before training it.
Evaluating an untrained model is allowed: the weights are just their random initial values, so the loss and accuracy are meaningless, but TensorFlow raises no error. A ValueError would occur only if the label and prediction shapes mismatched or the data types were wrong. Here the shapes and types are correct, so evaluation runs without error; the model outputs essentially random predictions and accuracy is around chance. Therefore, the correct answer is that no error occurs and accuracy is simply low.