ML Python programming · ~20 mins

ROC curve and AUC in ML Python - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · intermediate
Understanding the ROC Curve

What does the ROC curve represent in a binary classification model?

A. The trade-off between true positive rate and false positive rate at various thresholds
B. The relationship between precision and recall for different classes
C. The distribution of predicted probabilities for the positive class only
D. The accuracy of the model over different training epochs
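Each classification threshold turns the model's scores into hard predictions and so yields one (FPR, TPR) point; the ROC curve is the trace of those points. A minimal sketch of that sweep, using made-up labels and scores (not data from any problem on this page):

```python
# Illustrative data only: three negatives and three positives.
y_true = [0, 0, 0, 1, 1, 1]
y_scores = [0.1, 0.4, 0.6, 0.35, 0.7, 0.9]

def roc_point(y_true, y_scores, threshold):
    """Return the (FPR, TPR) point produced by one threshold."""
    # Predict positive when the score meets the threshold.
    preds = [1 if s >= threshold else 0 for s in y_scores]
    tp = sum(p == 1 and t == 1 for p, t in zip(preds, y_true))
    fp = sum(p == 1 and t == 0 for p, t in zip(preds, y_true))
    pos = sum(y_true)
    neg = len(y_true) - pos
    return fp / neg, tp / pos

# Sweeping the threshold from permissive to strict traces the curve
# from (1, 1) down to (0, 0).
for thr in [0.0, 0.3, 0.5, 0.8, 1.1]:
    print(thr, roc_point(y_true, y_scores, thr))
```

Lowering the threshold raises both rates together; the curve makes that trade-off explicit.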
Predict Output · intermediate
Calculating AUC from Model Predictions

Given the following true labels and predicted probabilities, what is the AUC score?

y_true = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]
auc = roc_auc_score(y_true, y_scores)
print(round(auc, 2))
A. 0.95
B. 0.85
C. 0.65
D. 0.75
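One way to sanity-check an AUC by hand: it equals the fraction of (negative, positive) pairs that the model ranks correctly, counting ties as half. For the four labels and scores in this problem, that is a quick pure-Python computation:

```python
# Data from the problem above.
y_true = [0, 0, 1, 1]
y_scores = [0.1, 0.4, 0.35, 0.8]

pos = [s for s, t in zip(y_scores, y_true) if t == 1]
neg = [s for s, t in zip(y_scores, y_true) if t == 0]

# Count pairs where a positive outscores a negative (ties count as 0.5).
correct = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
auc = correct / (len(pos) * len(neg))
print(auc)  # → 0.75
```

Three of the four (negative, positive) pairs are ranked correctly (the positive scored 0.35 loses to the negative scored 0.4), giving 3/4 = 0.75, which matches `roc_auc_score` on the same data.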
Metrics · advanced
Interpreting AUC Values

Which statement best describes an AUC value of 0.5 for a binary classifier?

A. The model perfectly separates positive and negative classes
B. The model performs no better than random guessing
C. The model has high precision but low recall
D. The model has a high false positive rate
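A quick way to build intuition for the AUC = 0.5 baseline is to score examples with random numbers that carry no information about the labels. A small simulation (using the pairwise-ranking definition of AUC, with an arbitrary seed) lands close to 0.5:

```python
import random

random.seed(0)  # arbitrary seed for reproducibility
n = 500
# Balanced labels; scores drawn independently of the labels.
y_true = [0] * n + [1] * n
y_scores = [random.random() for _ in range(2 * n)]

pos = [s for s, t in zip(y_scores, y_true) if t == 1]
neg = [s for s, t in zip(y_scores, y_true) if t == 0]

# Fraction of (negative, positive) pairs ranked correctly.
auc = sum(p > m for p in pos for m in neg) / (len(pos) * len(neg))
print(round(auc, 3))  # close to 0.5
```

Since a random score has a 50% chance of ranking any given positive above any given negative, the expected AUC is exactly 0.5: the diagonal of the ROC plot.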
🔧 Debug · advanced
Fixing Incorrect ROC Curve Plot

Consider this code snippet intended to plot an ROC curve. What is the main issue causing the plot to be incorrect?

from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt

y_true = [0, 1, 0, 1]
y_scores = [0.2, 0.8, 0.4, 0.6]
fpr, tpr, thresholds = roc_curve(y_true, y_true)
plt.plot(fpr, tpr)
plt.show()
A. The roc_curve function is called with true labels instead of predicted scores
B. The matplotlib import is incorrect
C. The y_true list contains invalid values
D. The plot function is missing labels for axes
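For reference after attempting the problem, here is one corrected version of the snippet, assuming the intent was to evaluate `y_scores` against `y_true` (the Agg backend and `savefig` call are substitutions so the sketch runs without a display):

```python
from sklearn.metrics import roc_curve
import matplotlib
matplotlib.use("Agg")  # headless backend; plt.show() would also work interactively
import matplotlib.pyplot as plt

y_true = [0, 1, 0, 1]
y_scores = [0.2, 0.8, 0.4, 0.6]

# Pass the predicted scores as the second argument, not the labels again.
fpr, tpr, thresholds = roc_curve(y_true, y_scores, drop_intermediate=False)
plt.plot(fpr, tpr)
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.savefig("roc.png")
```

With the labels passed twice, `roc_curve` sees "scores" that separate the classes trivially, so the plot degenerates to a single step rather than reflecting the model's actual ranking. On this data both positives outscore both negatives, so the corrected curve hugs the top-left corner (AUC = 1.0).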
Model Choice · expert
Choosing Model Based on AUC for Imbalanced Data

You have two binary classifiers evaluated on a highly imbalanced dataset. Model A has an AUC of 0.92, and Model B has an AUC of 0.85. However, Model B has higher precision on the minority class. Which model should you choose if your priority is to correctly identify positive cases while minimizing false alarms?

A. Choose Model A because it has higher recall on the minority class
B. Choose Model A because higher AUC means better overall ranking
C. Choose Model B because higher precision reduces false alarms on positives
D. Choose Model B because AUC is not useful for imbalanced data
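The tension in this problem is that AUC measures ranking quality across all thresholds, while precision is a property of one operating point. A toy example with made-up scores (not the models from the problem) shows that a classifier with a lower AUC can still have higher precision at a fixed threshold:

```python
# Two positives among ten examples; all scores are invented for illustration.
y_true = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
scores_a = [0.9, 0.8, 0.7, 0.6, 0.55, 0.1, 0.1, 0.1, 0.1, 0.1]
scores_b = [0.9, 0.3, 0.4, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]

def pairwise_auc(y, s):
    """AUC as the fraction of (negative, positive) pairs ranked correctly."""
    pos = [v for v, t in zip(s, y) if t == 1]
    neg = [v for v, t in zip(s, y) if t == 0]
    return sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))

def precision_at(y, s, thr=0.5):
    """Precision when everything scoring at or above thr is flagged positive."""
    flagged = [(v, t) for v, t in zip(s, y) if v >= thr]
    return sum(t for _, t in flagged) / len(flagged)

# Model A ranks perfectly (AUC 1.0) but flags three negatives at 0.5.
print(pairwise_auc(y_true, scores_a), precision_at(y_true, scores_a))  # 1.0 0.4
# Model B mis-ranks one pair (AUC 0.9375) yet flags only a true positive.
print(pairwise_auc(y_true, scores_b), precision_at(y_true, scores_b))  # 0.9375 1.0
```

This is why the problem's framing matters: if minimizing false alarms at your deployed threshold is the priority, the observed precision on the minority class can outweigh a modest AUC advantage.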