ML · Python programming · ~20 mins

Classification evaluation (accuracy, precision, recall, F1) in ML Python - Practice Problems & Coding Challenges

Challenge - 5 Problems
Metrics · intermediate
Calculate accuracy from confusion matrix
Given the confusion matrix below, what is the accuracy of the classification model?

Confusion matrix:
True Positive (TP) = 40
True Negative (TN) = 50
False Positive (FP) = 10
False Negative (FN) = 5
A. 0.90
B. 0.80
C. 0.85
D. 0.75
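As a quick check of the formula involved, accuracy is the fraction of all predictions that were correct. A minimal sketch in plain Python, using the counts given above (the helper name is illustrative):

```python
def accuracy(tp, tn, fp, fn):
    # Accuracy = correct predictions / all predictions
    return (tp + tn) / (tp + tn + fp + fn)

# Counts from the confusion matrix above
print(accuracy(tp=40, tn=50, fp=10, fn=5))  # 90 / 105 ≈ 0.857
```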
Metrics · intermediate
Identify precision from classification results
A model predicted 30 true positives, 10 false positives, and 20 false negatives. What is the precision of the model?
A. 0.60
B. 0.75
C. 0.50
D. 0.30
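Precision asks: of everything the model labeled positive, how much actually was? A minimal sketch with the numbers given above (false negatives do not enter the precision formula):

```python
def precision(tp, fp):
    # Precision = TP / (TP + FP)
    return tp / (tp + fp)

# Counts from the problem above; the 20 false negatives are not used here
print(precision(tp=30, fp=10))  # 30 / 40 = 0.75
```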
Metrics · advanced
Calculate recall from confusion matrix
Given the confusion matrix:
TP = 25, FP = 15, FN = 5, TN = 55
What is the recall of the model?
A. 0.833
B. 0.625
C. 0.714
D. 0.500
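Recall asks: of all actual positives, how many did the model find? A minimal sketch with the counts given above (FP and TN do not enter the recall formula):

```python
def recall(tp, fn):
    # Recall = TP / (TP + FN)
    return tp / (tp + fn)

print(recall(tp=25, fn=5))  # 25 / 30 ≈ 0.833
```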
Metrics · advanced
Compute F1 score from precision and recall
If a model has precision = 0.8 and recall = 0.6, what is the F1 score?
A. 0.75
B. 0.70
C. 0.72
D. 0.68
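The F1 score is the harmonic mean of precision and recall, which penalizes a large gap between the two. A minimal sketch with the values given above:

```python
def f1_score(precision, recall):
    # F1 = harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

print(f1_score(0.8, 0.6))  # 0.96 / 1.4 ≈ 0.686
```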
🧠 Conceptual · expert
Choosing the best metric for imbalanced data
You have a dataset where 95% of the samples belong to one class and 5% to the other. Which metric is most reliable to evaluate a model's performance on the minority class?
A. F1 score, because it balances precision and recall
B. Precision, because it measures false positives
C. Recall, because it measures how many actual positives are found
D. Accuracy, because it shows overall correctness
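To see why accuracy can mislead on a 95/5 split, consider a hypothetical degenerate model that always predicts the majority class. A minimal sketch in plain Python (`y_true` and `y_pred` are made-up labels for illustration):

```python
# 100 samples: 95 majority-class (0) and 5 minority-class (1)
y_true = [0] * 95 + [1] * 5
# A degenerate model that always predicts the majority class
y_pred = [0] * 100

# Accuracy: fraction of all predictions that match the true label
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Recall on the minority class: fraction of actual positives found
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
recall = tp / (tp + fn)

print(accuracy)  # 0.95, looks great
print(recall)    # 0.0, the minority class is never found
```

High overall accuracy coexists with zero recall on the minority class, which is why metrics built from precision and recall are the reliable choice here.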