Practice - 5 Tasks
Answer the questions below. Bracketed numbers such as [1] mark the blanks to fill in.
Task 1: fill in the blank (easy)
Topic: Prompt Engineering / GenAI

Complete the code to calculate accuracy from predictions and labels:

    accuracy = sum(predictions == [1]) / len(predictions)

💡 Hint: common mistakes
- Using predictions instead of labels for the comparison
- Dividing by the wrong length

Explanation: Accuracy compares predicted labels to the true labels.
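With the blank filled in as the explanation suggests, a minimal runnable sketch (using hypothetical NumPy arrays for the predictions and labels) looks like:

```python
import numpy as np

# Hypothetical data, for illustration only
labels = np.array([1, 0, 1, 1, 0])
predictions = np.array([1, 0, 0, 1, 0])

# Compare each prediction to its true label, then divide by the total count
accuracy = sum(predictions == labels) / len(predictions)
print(accuracy)  # 4 of 5 predictions match the labels, so 0.8
```

Note that `predictions == labels` compares element-wise only for array types such as NumPy arrays; with plain Python lists, `==` compares the two lists as a whole and the formula breaks.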
Task 2: fill in the blank (medium)
Topic: Prompt Engineering / GenAI

Complete the code to compute the precision score using sklearn:

    precision = precision_score([1], predictions)

💡 Hint: common mistakes
- Swapping predictions and labels
- Using the wrong variable names

Explanation: Precision compares true labels (first argument) to predictions.
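Assuming scikit-learn is installed, the completed call can be sketched with made-up data; the argument order (true labels first, predictions second) is the point of the exercise:

```python
from sklearn.metrics import precision_score

# Hypothetical data, for illustration only
labels = [1, 0, 1, 1, 0]
predictions = [1, 0, 0, 1, 1]

# True labels come first, predictions second
precision = precision_score(labels, predictions)
print(precision)  # 2 of the 3 positive predictions are correct: 2/3
```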
Task 3: fill in the blank (hard)
Topic: Prompt Engineering / GenAI

Fix the error in computing the F1 score by filling in the missing argument:

    f1 = f1_score(labels, predictions, average=[1])

💡 Hint: common mistakes
- Using 'macro' or 'micro' for binary tasks
- Omitting the average argument

Explanation: For binary classification, average='binary' is required.
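As a sketch with hypothetical binary data, the completed line might look like the following; `average='binary'` reports the score for the positive class only:

```python
from sklearn.metrics import f1_score

# Hypothetical binary data, for illustration only
labels = [1, 0, 1, 1, 0]
predictions = [1, 0, 0, 1, 1]

# average='binary' scores only the positive class (pos_label, 1 by default)
f1 = f1_score(labels, predictions, average='binary')
```

Here precision and recall are both 2/3, so the F1 score (their harmonic mean) is also 2/3.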
Task 4: fill in the blanks (hard)
Topic: Prompt Engineering / GenAI

Fill both blanks to create a dictionary of recall scores for each class:

    recall_scores = {cls: recall_score(labels, predictions, average=[1], labels=[cls]) for cls in [2]}

💡 Hint: common mistakes
- Using 'macro' average, which averages over all classes
- Not iterating over the class labels

Explanation: Recall per class uses average='binary' and iterates over the classes.
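One way to realize this per-class pattern is sketched below with made-up multiclass data. This variant restricts `labels=[cls]` and averages over that single label, which reduces to the recall for that class alone; the class set iterated over is derived from the labels, an assumption for illustration:

```python
from sklearn.metrics import recall_score

# Hypothetical multiclass data, for illustration only
labels = [0, 1, 2, 0, 1, 2]
predictions = [0, 2, 2, 0, 1, 1]

# Restricting labels=[cls] and averaging over that single label
# yields the recall for that class alone
recall_scores = {
    cls: recall_score(labels, predictions, average='macro', labels=[cls])
    for cls in sorted(set(labels))
}
print(recall_scores)  # class 0 is always recovered; classes 1 and 2 each miss one sample
```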
Task 5: fill in the blanks (hard)
Topic: Prompt Engineering / GenAI

Fill all three blanks to compute a confusion matrix and extract the true positives:

    cm = confusion_matrix([1], [2])
    true_positives = cm[[3], [3]]

💡 Hint: common mistakes
- Swapping labels and predictions
- Using the wrong index for true positives

Explanation: The confusion matrix compares true labels and predictions. True positives for class 0 are at cm[0, 0].
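With the blanks filled as the explanation indicates, a runnable sketch on hypothetical binary data looks like:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical binary data, for illustration only
labels = [0, 1, 0, 1, 1]
predictions = [0, 1, 1, 1, 0]

# Rows index the true labels, columns the predicted labels
cm = confusion_matrix(labels, predictions)

# cm[0, 0] counts samples whose true and predicted class are both 0
true_positives = cm[0, 0]
print(true_positives)  # one class-0 sample was correctly predicted as class 0
```

In sklearn's convention, entry cm[i, j] counts samples with true class i that were predicted as class j, so the diagonal holds each class's correct predictions.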