Complete the code to calculate the accuracy of a classification model.
accuracy = accuracy_score(y_true, y_pred)
The accuracy_score function compares the true labels y_true with the predicted labels y_pred to calculate accuracy.
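A minimal runnable sketch of the completed line, using small illustrative label lists (the data is assumed, not part of the exercise):

```python
from sklearn.metrics import accuracy_score

y_true = [0, 1, 1, 0, 1]  # toy true labels
y_pred = [0, 1, 0, 0, 1]  # toy predictions: 4 of 5 match

accuracy = accuracy_score(y_true, y_pred)
print(accuracy)  # 0.8
```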
Complete the code to split the dataset into training and testing sets.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
The test_size parameter defines the proportion of the dataset to include in the test split. 0.25 means 25% test data and 75% training data.
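To see the 25/75 split in action, here is a short sketch with a toy dataset of 8 samples (the data is assumed for illustration):

```python
from sklearn.model_selection import train_test_split

X = list(range(8))   # toy features
y = [0, 1] * 4       # toy labels

# test_size=0.25 puts 2 of the 8 samples in the test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
print(len(X_train), len(X_test))  # 6 2
```

Setting random_state makes the shuffle reproducible, so repeated runs yield the same split.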
Fix the error in the code to compute the confusion matrix correctly.
cm = confusion_matrix(y_true, y_pred)
The confusion matrix compares true labels y_true with predicted labels y_pred. Using other variables causes incorrect results or errors.
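A runnable sketch of the corrected call on toy binary labels (illustrative data): rows of the resulting matrix are true classes, columns are predicted classes.

```python
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1]
y_pred = [0, 1, 1, 1]

cm = confusion_matrix(y_true, y_pred)
print(cm)
# [[1 1]    row 0: one true 0 predicted 0, one true 0 predicted 1
#  [0 2]]   row 1: both true 1s predicted 1
```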
Fill both blanks to create a dictionary comprehension that maps each class to its precision score.
precision_scores = {cls: precision_score(y_true, y_pred, labels=[cls], average='macro') for cls in classes}
Setting average='macro' calculates precision for each class independently. The variable classes holds the list of class labels to iterate over.
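A sketch of the completed comprehension on a toy three-class example (the data and the classes list are assumed): restricting labels to a single class makes each precision_score call return that class's precision alone.

```python
from sklearn.metrics import precision_score

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
classes = [0, 1, 2]

# One precision value per class: class 0 -> 1/2, class 1 -> 2/3, class 2 -> 1/1
precision_scores = {
    cls: precision_score(y_true, y_pred, labels=[cls], average='macro')
    for cls in classes
}
print(precision_scores)
```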
Fill all three blanks to compute the F1 score with macro averaging and print the result.
f1 = f1_score(y_true, y_pred, average='macro')
print('F1 Score:', f1)
The F1 score is computed using predicted labels y_pred and macro averaging to treat all classes equally. The variable f1 holds the score to print.
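A runnable sketch of the completed snippet on toy binary labels (illustrative data): macro averaging computes F1 per class and takes the unweighted mean, so both classes count equally.

```python
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1]
y_pred = [0, 1, 1, 1]

# Class 0: F1 = 2/3; class 1: F1 = 4/5; macro mean = (2/3 + 4/5) / 2
f1 = f1_score(y_true, y_pred, average='macro')
print('F1 Score:', f1)
```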