NLP · ~10 mins

Evaluation metrics (accuracy, F1, confusion matrix) in NLP - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below.

1. Fill in the blank (easy)

Complete the code to calculate the accuracy score for the predictions.

Python
from sklearn.metrics import [1]

true_labels = [0, 1, 1, 0, 1]
pred_labels = [0, 0, 1, 0, 1]

acc = [1](true_labels, pred_labels)
print(f"Accuracy: {acc}")
Drag options to the blanks, or click a blank and then click an option.
A. confusion_matrix
B. f1_score
C. accuracy_score
D. classification_report
Common Mistakes
Using f1_score instead of accuracy_score for accuracy calculation.
Confusing confusion_matrix output with accuracy value.
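For reference, the completed snippet runs as follows (a minimal worked sketch; assumes scikit-learn is installed):

```python
from sklearn.metrics import accuracy_score

true_labels = [0, 1, 1, 0, 1]
pred_labels = [0, 0, 1, 0, 1]

# Accuracy = correct predictions / total predictions.
# Four of the five labels match (only index 1 differs), so accuracy is 4/5.
acc = accuracy_score(true_labels, pred_labels)
print(f"Accuracy: {acc}")  # Accuracy: 0.8
```

Unlike confusion_matrix, which returns a 2-D array of counts, accuracy_score returns a single fraction.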
2. Fill in the blank (medium)

Complete the code to calculate the F1 score for binary classification.

Python
from sklearn.metrics import f1_score

true_labels = [1, 0, 1, 1, 0]
pred_labels = [1, 0, 0, 1, 0]

f1 = f1_score(true_labels, pred_labels, [1]='binary')
print(f"F1 Score: {f1}")
A. average
B. pos_label
C. labels
D. zero_division
Common Mistakes
Using 'pos_label' instead of 'average' parameter.
Omitting the 'average' parameter, which causes errors for multiclass data.
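With average='binary', the score is computed for the positive class (label 1). A worked version of the snippet above (a sketch, assuming scikit-learn):

```python
from sklearn.metrics import f1_score

true_labels = [1, 0, 1, 1, 0]
pred_labels = [1, 0, 0, 1, 0]

# For the positive class: TP = 2, FP = 0, FN = 1.
# precision = 2/2 = 1.0, recall = 2/3,
# F1 = 2 * precision * recall / (precision + recall) = 0.8
f1 = f1_score(true_labels, pred_labels, average='binary')
print(f"F1 Score: {f1:.2f}")  # F1 Score: 0.80
```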
3. Fill in the blank (hard)

Fix the error in the code to correctly compute the confusion matrix.

Python
from sklearn.metrics import confusion_matrix

true = [1, 0, 1, 1, 0]
pred = [1, 1, 1, 0, 0]

cm = confusion_matrix([1], pred)
print(cm)
A. true
B. pred
C. average
D. labels
Common Mistakes
Swapping true and predicted labels in confusion_matrix function.
Passing keyword arguments such as 'average' that confusion_matrix does not accept.
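Argument order matters because scikit-learn puts the true labels on the rows and the predictions on the columns. A quick check of the corrected snippet (a sketch, assuming scikit-learn; for binary labels, ravel() unpacks the matrix in the documented tn, fp, fn, tp order):

```python
from sklearn.metrics import confusion_matrix

true = [1, 0, 1, 1, 0]
pred = [1, 1, 1, 0, 0]

# Rows are true labels, columns are predictions.
cm = confusion_matrix(true, pred)
print(cm)
# [[1 1]
#  [1 2]]

# For binary labels, ravel() yields tn, fp, fn, tp.
tn, fp, fn, tp = cm.ravel()
print(tn, fp, fn, tp)  # 1 1 1 2
```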
4. Fill in the blank (hard)

Fill in the blank to create a confusion matrix and extract the true positives.

Python
from sklearn.metrics import confusion_matrix

true = [0, 1, 0, 1, 1]
pred = [0, 0, 0, 1, 1]

cm = confusion_matrix(true, pred)
true_positives = cm[1]
print(f"True Positives: {true_positives}")
A. [1]
B. [1, 1]
C. [0, 0]
D. [0]
Common Mistakes
Using cm[0][0] which is true negatives, not true positives.
Using out-of-range indices, which raises an IndexError.
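To make the layout concrete, here is the matrix for this data (a sketch assuming scikit-learn; for labels {0, 1}, scikit-learn arranges the matrix as [[TN, FP], [FN, TP]]):

```python
from sklearn.metrics import confusion_matrix

true = [0, 1, 0, 1, 1]
pred = [0, 0, 0, 1, 1]

cm = confusion_matrix(true, pred)
# Layout for binary labels {0, 1}:
# [[TN, FP],
#  [FN, TP]]
print(cm)
# [[2 0]
#  [1 2]]

print(f"True Positives: {cm[1, 1]}")  # 2
print(f"True Negatives: {cm[0, 0]}")  # 2
```

Note that cm[0, 0] happens to equal cm[1, 1] here, which is exactly why confusing true negatives with true positives can go unnoticed on small samples.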
5. Fill in the blank (hard)

Fill all three blanks to compute the accuracy, F1 score, and confusion matrix for the predictions.

Python
from sklearn.metrics import [1], [2], [3]

true = [1, 0, 1, 0, 1]
pred = [1, 0, 0, 0, 1]

acc = [1](true, pred)
f1 = [2](true, pred, average='binary')
cm = [3](true, pred)

print(f"Accuracy: {acc}")
print(f"F1 Score: {f1}")
print(f"Confusion Matrix:\n{cm}")
A. accuracy_score
B. f1_score
C. confusion_matrix
D. classification_report
Common Mistakes
Mixing up metric function names or parameters.
Forgetting to specify average='binary' for f1_score.
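Putting the three metrics together on this data (a worked sketch, assuming scikit-learn; classification_report, the remaining option, instead bundles precision, recall, and F1 per class into a text summary):

```python
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix

true = [1, 0, 1, 0, 1]
pred = [1, 0, 0, 0, 1]

acc = accuracy_score(true, pred)             # 4 of 5 correct -> 0.8
f1 = f1_score(true, pred, average='binary')  # precision 1.0, recall 2/3 -> 0.8
cm = confusion_matrix(true, pred)            # [[2 0], [1 2]]

print(f"Accuracy: {acc}")
print(f"F1 Score: {f1:.2f}")
print(f"Confusion Matrix:\n{cm}")
```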