What does the ROC curve represent in a binary classification model?
Think about what happens when you change the decision threshold in classification.
The ROC curve plots the true positive rate (sensitivity) against the false positive rate (1 - specificity) at various threshold settings, showing the trade-off between detecting positives and avoiding false alarms.
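The trade-off can be made concrete by sweeping the threshold and computing the (FPR, TPR) point at each setting. A minimal sketch, using made-up labels and scores (not from any specific model):

```python
# Illustrative labels and scores (assumed, not from a trained model)
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]

def roc_point(threshold):
    """Return (FPR, TPR) when predicting positive for score >= threshold."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    pos = sum(labels)          # total actual positives
    neg = len(labels) - pos    # total actual negatives
    return fp / neg, tp / pos

for t in [0.0, 0.3, 0.5, 0.9]:
    print(f"threshold={t}: (FPR, TPR) = {roc_point(t)}")
```

Plotting these (FPR, TPR) pairs for all thresholds traces out the ROC curve; a threshold of 0 sits at (1, 1) and a threshold above every score sits at (0, 0).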
What is the output of the following TensorFlow code snippet?
import tensorflow as tf

labels = [0, 0, 1, 1]
predictions = [0.1, 0.4, 0.35, 0.8]

auc_metric = tf.keras.metrics.AUC()
auc_metric.update_state(labels, predictions)
result = auc_metric.result().numpy()
print(round(result, 2))
Calculate the area under the ROC curve for the given predictions and labels.
The printed output is 0.75. AUC is the probability that a randomly chosen positive instance receives a higher score than a randomly chosen negative one; here, three of the four positive-negative pairs are ranked correctly, giving 3/4 = 0.75.
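The rank-based interpretation can be verified directly without TensorFlow, by counting positive-negative pairs in which the positive example outscores the negative one (ties count as half). A minimal sketch using the same labels and predictions as the snippet above:

```python
from itertools import product

labels = [0, 0, 1, 1]
predictions = [0.1, 0.4, 0.35, 0.8]

pos = [p for y, p in zip(labels, predictions) if y == 1]
neg = [p for y, p in zip(labels, predictions) if y == 0]

# Fraction of (positive, negative) pairs ranked correctly; ties score 0.5
wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
           for p, n in product(pos, neg))
auc = wins / (len(pos) * len(neg))
print(auc)  # 0.75
```

Only the pair (0.35, 0.4) is mis-ranked, so 3 of 4 pairs are correct and the AUC is 0.75, matching the metric's output.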
You trained three binary classifiers and obtained these ROC AUC scores on the validation set: Model A: 0.82, Model B: 0.91, Model C: 0.88. Which model should you select if you want the best overall ability to distinguish classes?
Higher AUC means better class separation ability.
Model B has the highest ROC AUC score (0.91), indicating it best separates positive and negative classes.
How does changing the classification threshold affect the ROC curve?
Think about what happens when you decide to be more or less strict in classifying positives.
Adjusting the threshold moves the operating point along the ROC curve, altering true positive and false positive rates but not the curve shape.
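A short sketch of how the operating point shifts: a lenient threshold catches every positive but admits a false alarm, while a strict threshold trades recall for fewer false positives. The labels and scores below are illustrative assumptions:

```python
# Illustrative labels and scores (assumed, not from a trained model)
labels = [0, 1, 0, 1]
scores = [0.2, 0.8, 0.4, 0.9]

def operating_point(threshold):
    """Return (FPR, TPR) for a given decision threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    pos = sum(labels)
    neg = len(labels) - pos
    return fp / neg, tp / pos

print(operating_point(0.3))   # lenient: (0.5, 1.0)
print(operating_point(0.85))  # strict:  (0.0, 0.5)
```

Both points lie on the same ROC curve; changing the threshold only selects which point you operate at, because the curve itself depends on the model's score ranking, not on any single cutoff.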
What error does the following TensorFlow code produce?
import tensorflow as tf

labels = [0, 1, 0, 1]
predictions = [0.2, 0.8, 0.4]

auc_metric = tf.keras.metrics.AUC()
auc_metric.update_state(labels, predictions)
print(auc_metric.result().numpy())
Check if labels and predictions have the same number of elements.
The code raises a ValueError because labels and predictions lists have different lengths (4 vs 3), which is invalid for metric calculation.
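One way to surface this mistake with a clearer message is to validate lengths before calling the metric. A minimal sketch; `safe_update` is a hypothetical helper, not a TensorFlow API:

```python
def safe_update(metric, labels, predictions):
    """Hypothetical guard: check lengths before delegating to update_state."""
    if len(labels) != len(predictions):
        raise ValueError(
            f"labels/predictions length mismatch: "
            f"{len(labels)} vs {len(predictions)}"
        )
    metric.update_state(labels, predictions)
    return metric
```

With the mismatched inputs from the snippet above (4 labels, 3 predictions), the guard raises the ValueError immediately at the call site instead of from inside the metric's internals.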