Practice - 5 Tasks
Answer the questions below
Task 1 - Fill in the blank (easy)
Complete the code to calculate the accuracy score for predictions.
Topic: NLP

from sklearn.metrics import [1]
true_labels = [0, 1, 1, 0, 1]
pred_labels = [0, 0, 1, 0, 1]
acc = [1](true_labels, pred_labels)
print(f"Accuracy: {acc}")
Common Mistakes
Using f1_score instead of accuracy_score for accuracy calculation.
Confusing confusion_matrix output with accuracy value.
Explanation: The accuracy_score function calculates the ratio of correct predictions to total predictions.
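The explanation points to accuracy_score as the intended answer; filling the blank gives a runnable sketch:

```python
from sklearn.metrics import accuracy_score

true_labels = [0, 1, 1, 0, 1]
pred_labels = [0, 0, 1, 0, 1]

# 4 of the 5 predictions match the true labels, so accuracy is 4/5 = 0.8
acc = accuracy_score(true_labels, pred_labels)
print(f"Accuracy: {acc}")
```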
Task 2 - Fill in the blank (medium)
Complete the code to calculate the F1 score for binary classification.
Topic: NLP

from sklearn.metrics import f1_score
true_labels = [1, 0, 1, 1, 0]
pred_labels = [1, 0, 0, 1, 0]
f1 = f1_score(true_labels, pred_labels, [1]='binary')
print(f"F1 Score: {f1}")
Common Mistakes
Using 'pos_label' instead of 'average' parameter.
Omitting the 'average' parameter causing errors in multiclass.
Explanation: The 'average' parameter specifies the type of averaging performed on the data for F1 score calculation.
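With the blank filled in as the average parameter, the completed exercise runs as a sketch:

```python
from sklearn.metrics import f1_score

true_labels = [1, 0, 1, 1, 0]
pred_labels = [1, 0, 0, 1, 0]

# precision = 2/2 = 1.0, recall = 2/3,
# so F1 = 2 * 1.0 * (2/3) / (1.0 + 2/3) = 0.8
f1 = f1_score(true_labels, pred_labels, average='binary')
print(f"F1 Score: {f1}")
```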
Task 3 - Fill in the blank (hard)
Fix the error in the code to correctly compute the confusion matrix.
Topic: NLP

from sklearn.metrics import confusion_matrix
true = [1, 0, 1, 1, 0]
pred = [1, 1, 1, 0, 0]
cm = confusion_matrix([1], pred)
print(cm)
Common Mistakes
Swapping true and predicted labels in confusion_matrix function.
Passing parameters unrelated to labels.
Explanation: The first argument to confusion_matrix must be the true labels, followed by the predicted labels.
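Filling the blank with the true labels, as the explanation indicates, gives a runnable sketch:

```python
from sklearn.metrics import confusion_matrix

true = [1, 0, 1, 1, 0]
pred = [1, 1, 1, 0, 0]

# Rows correspond to true labels, columns to predictions,
# laid out as [[TN, FP], [FN, TP]] for binary labels sorted [0, 1]
cm = confusion_matrix(true, pred)
print(cm)
```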
Task 4 - Fill in the blanks (hard)
Fill both blanks to create a confusion matrix and extract true positives.
Topic: NLP

from sklearn.metrics import confusion_matrix
true = [0, 1, 0, 1, 1]
pred = [0, 0, 0, 1, 1]
cm = confusion_matrix(true, pred)
true_positives = cm[[1]][[2]]
print(f"True Positives: {true_positives}")
Common Mistakes
Using cm[0][0] which is true negatives, not true positives.
Using incorrect indices causing index errors.
Explanation: True positives are located at row 1, column 1 in the confusion matrix, accessed by cm[1][1].
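Filling both blanks with index 1, as the explanation indicates, gives a runnable sketch:

```python
from sklearn.metrics import confusion_matrix

true = [0, 1, 0, 1, 1]
pred = [0, 0, 0, 1, 1]

cm = confusion_matrix(true, pred)
# For binary labels, cm[1][1] counts samples that are truly 1
# and predicted 1, i.e. the true positives
true_positives = cm[1][1]
print(f"True Positives: {true_positives}")
```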
Task 5 - Fill in the blanks (hard)
Fill all three blanks to compute the accuracy, F1 score, and confusion matrix for predictions.
Topic: NLP

from sklearn.metrics import [1], [2], [3]
true = [1, 0, 1, 0, 1]
pred = [1, 0, 0, 0, 1]
acc = [1](true, pred)
f1 = [2](true, pred, average='binary')
cm = [3](true, pred)
print(f"Accuracy: {acc}")
print(f"F1 Score: {f1}")
print(f"Confusion Matrix:\n{cm}")
Common Mistakes
Mixing up metric function names or parameters.
Forgetting to specify average='binary' for f1_score.
Explanation: Use accuracy_score for accuracy, f1_score for F1, and confusion_matrix for the confusion matrix.
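Filling the three blanks as the explanation indicates gives a runnable sketch that combines all three metrics:

```python
from sklearn.metrics import accuracy_score, f1_score, confusion_matrix

true = [1, 0, 1, 0, 1]
pred = [1, 0, 0, 0, 1]

acc = accuracy_score(true, pred)             # 4/5 correct -> 0.8
f1 = f1_score(true, pred, average='binary')  # precision 1.0, recall 2/3 -> 0.8
cm = confusion_matrix(true, pred)            # [[TN, FP], [FN, TP]]
print(f"Accuracy: {acc}")
print(f"F1 Score: {f1}")
print(f"Confusion Matrix:\n{cm}")
```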