You trained two image classification models. Model A has 85% accuracy and 0.35 loss. Model B has 82% accuracy and 0.28 loss. Which model is generally better?
Think about what accuracy and loss represent and which one directly shows correct predictions.
Accuracy directly reports the fraction of correct predictions, so higher accuracy usually means better performance. Loss (e.g. cross-entropy) measures how confident the model's predictions are and is less intuitive to compare across models. If correct classification is what you care about, Model A's higher accuracy makes it the better choice overall, even though Model B has the lower loss.
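A small sketch of how the two metrics can disagree, using made-up probabilities: one model gets more answers right but is very confident on its mistake, so it scores higher accuracy yet higher cross-entropy loss. The models and numbers are hypothetical, chosen only to illustrate the trade-off.

```python
import numpy as np

def accuracy(y_true, probs):
    # Fraction of samples where the highest-probability class matches the label.
    return float(np.mean(np.argmax(probs, axis=1) == y_true))

def cross_entropy(y_true, probs):
    # Mean negative log-probability the model assigns to the true class.
    return float(-np.mean(np.log(probs[np.arange(len(y_true)), y_true])))

y_true = np.array([0, 1, 1, 0])
# "Model A": three correct answers, but very confident on its one mistake.
probs_a = np.array([[0.90, 0.10], [0.20, 0.80], [0.95, 0.05], [0.80, 0.20]])
# "Model B": only two correct answers, but never badly overconfident.
probs_b = np.array([[0.55, 0.45], [0.45, 0.55], [0.55, 0.45], [0.45, 0.55]])

acc_a, loss_a = accuracy(y_true, probs_a), cross_entropy(y_true, probs_a)
acc_b, loss_b = accuracy(y_true, probs_b), cross_entropy(y_true, probs_b)
print(f"A: accuracy={acc_a:.2f}, loss={loss_a:.2f}")  # higher accuracy, higher loss
print(f"B: accuracy={acc_b:.2f}, loss={loss_b:.2f}")  # lower accuracy, lower loss
```

The confident mistake in Model A dominates its loss, which is why loss alone can rank models differently than accuracy does.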
What is the printed accuracy after running this code?
from sklearn.metrics import accuracy_score

y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 0, 1]
accuracy = accuracy_score(y_true, y_pred)
print(f"Accuracy: {accuracy:.2f}")
Count how many predictions match the true labels and divide by total.
Out of 5 labels, 4 predictions match (indices 0, 1, 3, 4), so accuracy = 4/5 = 0.80 and the code prints "Accuracy: 0.80".
You have a dataset where 95% of images are class A and 5% are class B. You trained two models:
- Model X: 95% accuracy, but poor recall on class B.
- Model Y: 90% accuracy, but high recall on class B.
Which model is better for detecting class B?
Think about which metric matters more when one class is rare.
Recall measures the fraction of actual positives the model finds. When a class is rare, overall accuracy can stay high even if the model misses that class entirely, so Model Y's high recall on class B makes it the better detector despite its lower accuracy.
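A sketch of the scenario with hypothetical numbers, reusing sklearn's metrics: 19 samples of class A and 1 of class B. A model that always predicts the majority class gets 95% accuracy but zero recall on class B.

```python
from sklearn.metrics import accuracy_score, recall_score

# Hypothetical imbalanced set: 19 samples of class A (0), 1 of class B (1).
y_true = [0] * 19 + [1]
# "Model X": always predicts the majority class.
pred_x = [0] * 20
# "Model Y": flags three samples as class B, catching the real one
# at the cost of two false alarms.
pred_y = [0] * 17 + [1, 1, 1]

print(accuracy_score(y_true, pred_x))             # 0.95, but...
print(recall_score(y_true, pred_x, pos_label=1))  # 0.0 -- misses every class B
print(accuracy_score(y_true, pred_y))             # 0.90
print(recall_score(y_true, pred_y, pos_label=1))  # 1.0 -- finds every class B
```

This is why accuracy alone is a misleading metric on imbalanced data.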
You train two identical neural networks on the same data but with different batch sizes: 16 and 256. Which effect is expected when using batch size 256 compared to 16?
Think about how batch size affects speed and model updates.
With batch size 256, each epoch typically runs faster because the hardware processes more samples per step, but the model performs far fewer gradient updates per epoch and each update is smoother. This can reduce the regularizing noise of small batches and sometimes leads to worse final accuracy.
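The update-count difference is easy to quantify: one pass over the data yields ceil(N / batch_size) gradient steps. A minimal sketch, assuming a hypothetical dataset of 50,000 samples:

```python
import math

def updates_per_epoch(n_samples, batch_size):
    # Gradient updates in one pass over the data (last batch may be partial).
    return math.ceil(n_samples / batch_size)

n = 50_000  # hypothetical dataset size
print(updates_per_epoch(n, 16))   # 3125 updates per epoch
print(updates_per_epoch(n, 256))  # 196 updates per epoch
```

Batch size 256 gives roughly 16x fewer updates per epoch, which is why learning rates are often scaled up when the batch size grows.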
You trained a convolutional neural network on a small dataset. Training accuracy is 98%, but validation accuracy is 60%. Which is the most likely cause?
Think about what happens when training accuracy is very high but validation is low.
High training accuracy with low validation accuracy usually means the model memorizes training data but fails to generalize, indicating overfitting.
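One common diagnostic is simply the size of the train/validation gap. A minimal sketch with a hypothetical threshold (the 0.15 cutoff is arbitrary, chosen for illustration):

```python
def looks_overfit(train_acc, val_acc, gap_threshold=0.15):
    # Flag a large train/validation accuracy gap; the threshold is an
    # arbitrary illustration, not a standard value.
    return (train_acc - val_acc) > gap_threshold

print(looks_overfit(0.98, 0.60))  # True: a 38-point gap suggests overfitting
print(looks_overfit(0.90, 0.88))  # False: the model generalizes about as well
```

Typical remedies for a gap like this include more training data, data augmentation, regularization, or early stopping.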