Complete the code to import the function that computes the confusion matrix.
from sklearn.metrics import [1]
The confusion_matrix function from sklearn.metrics is used to compute the confusion matrix for classification results.
Complete the code to calculate accuracy from true and predicted labels.
accuracy = sum(y_true == y_pred) / [1]
Accuracy is the number of correct predictions divided by the total number of samples, which is len(y_true).
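As a quick check of the completed formula, here is a minimal sketch using hypothetical toy labels (the arrays below are made up for illustration; numpy arrays are assumed so that y_true == y_pred compares elementwise):

```python
import numpy as np

# Hypothetical toy labels, chosen only to illustrate the formula.
y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])

# Correct predictions divided by the total number of samples.
accuracy = sum(y_true == y_pred) / len(y_true)
print(accuracy)  # 4 of 5 predictions match, so 0.8
```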
Fix the error in the code to correctly compute the confusion matrix.
cm = confusion_matrix([1], y_pred)
The first argument to confusion_matrix must be the true labels y_true, followed by the predicted labels y_pred.
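With the argument order corrected, the call looks like the sketch below. The toy labels are hypothetical, used only to show the resulting matrix shape (rows are true classes, columns are predicted classes):

```python
from sklearn.metrics import confusion_matrix

# Hypothetical toy labels for illustration.
y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]

# True labels first, predicted labels second.
cm = confusion_matrix(y_true, y_pred)
print(cm)  # rows = true classes, columns = predicted classes
```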
Fill both blanks to create a dictionary comprehension that counts true positives for each class.
true_positives = {cls: sum((y_true == cls) & ([1] == cls)) for cls in [2]}
We compare predicted labels y_pred to each class and iterate over all classes stored in classes to count true positives.
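A filled-in version of this comprehension can be sketched as follows, assuming y_true and y_pred are numpy arrays and classes holds the distinct labels (the toy data is hypothetical):

```python
import numpy as np

# Hypothetical toy labels for illustration.
y_true = np.array([0, 1, 2, 2, 1])
y_pred = np.array([0, 1, 2, 1, 1])
classes = np.unique(y_true)

# A sample is a true positive for cls when both the true and
# predicted label equal cls; the boolean arrays are ANDed and summed.
true_positives = {cls: int(sum((y_true == cls) & (y_pred == cls)))
                  for cls in classes}
print(true_positives)
```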
Fill all three blanks to compute precision for each class from the confusion matrix.
precision = {j: cm[[1], [2]] / sum(cm[:, [3]]) for j in classes}
Precision for each class is the true positives cm[j, j] divided by the sum of predicted positives sum(cm[:, j]). Here, j represents the class index.
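The completed comprehension can be sketched as below. The confusion matrix here is a hypothetical example, and classes is assumed to hold the row/column indices of cm:

```python
import numpy as np

# Hypothetical confusion matrix: rows are true classes, columns predicted.
cm = np.array([[2, 0, 0],
               [1, 2, 0],
               [0, 1, 1]])
classes = [0, 1, 2]

# Precision per class: diagonal entry (true positives) divided by
# the column sum (all samples predicted as that class).
precision = {j: cm[j, j] / sum(cm[:, j]) for j in classes}
print(precision)
```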