Challenge - 5 Problems
Categorical Cross-Entropy Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
intermediate · 2:00 remaining
Output of categorical cross-entropy loss calculation
What is the output value of the categorical cross-entropy loss for the given true labels and predicted probabilities?
TensorFlow
import tensorflow as tf

true_labels = tf.constant([[0, 1, 0], [1, 0, 0]], dtype=tf.float32)
predicted_probs = tf.constant([[0.1, 0.8, 0.1], [0.7, 0.2, 0.1]], dtype=tf.float32)
loss_fn = tf.keras.losses.CategoricalCrossentropy()
loss_value = loss_fn(true_labels, predicted_probs).numpy()
print(loss_value)
Attempts: 2 left
💡 Hint
Recall that categorical cross-entropy measures the difference between true labels and predicted probabilities using the negative log likelihood.
✗ Incorrect
The categorical cross-entropy loss averages the negative log probability assigned to the true class. Here the model assigns probability 0.8 and 0.7 to the two true classes, so the loss is −(ln 0.8 + ln 0.7) / 2 ≈ (0.2231 + 0.3567) / 2 ≈ 0.2899.
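The value can be checked by hand without TensorFlow; this sketch uses only the standard library:

```python
import math

# Manual categorical cross-entropy: average of -log(p) over the probability
# each sample's prediction assigns to its true class.
probs_of_true_class = [0.8, 0.7]  # from the two rows of predicted_probs
loss = sum(-math.log(p) for p in probs_of_true_class) / len(probs_of_true_class)
print(round(loss, 4))  # → 0.2899
```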
❓ Model Choice
intermediate · 1:30 remaining
Choosing the correct model output activation for categorical cross-entropy
Which activation function should the model's output layer use when training with categorical cross-entropy loss on multi-class classification?
Attempts: 2 left
💡 Hint
Think about how probabilities for multiple classes should sum up.
✗ Incorrect
Softmax converts raw outputs into probabilities that sum to 1, which is required for categorical cross-entropy loss in multi-class classification.
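A minimal sketch of the softmax normalization in plain Python (no framework assumed), showing that the outputs form a valid probability distribution:

```python
import math

def softmax(logits):
    # Exponentiate each logit, then normalize so the outputs sum to 1.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print([round(p, 3) for p in probs])  # largest logit gets the largest probability
print(round(sum(probs), 6))
```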
❓ Hyperparameter
advanced · 1:30 remaining
Effect of label smoothing on categorical cross-entropy loss
What is the main effect of applying label smoothing when using categorical cross-entropy loss during training?
Attempts: 2 left
💡 Hint
Think about how smoothing changes the target labels.
✗ Incorrect
Label smoothing replaces hard 0/1 labels with softer values, which reduces overconfidence and helps generalization.
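A sketch of the smoothing transform. With smoothing factor eps and K classes, each one-hot target y becomes y · (1 − eps) + eps / K, which is the convention Keras uses for its `label_smoothing` argument:

```python
def smooth_labels(one_hot, eps=0.1):
    # Mix the hard one-hot target toward a uniform distribution over K classes.
    k = len(one_hot)
    return [y * (1 - eps) + eps / k for y in one_hot]

smoothed = smooth_labels([0.0, 1.0, 0.0])
print([round(v, 4) for v in smoothed])  # → [0.0333, 0.9333, 0.0333]
```

The true class keeps most of the mass (0.9333 instead of 1.0), so the model is no longer pushed toward infinitely confident logits.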
🔧 Debug
advanced · 2:00 remaining
Identifying the error in categorical cross-entropy loss usage
What error will this code raise when computing categorical cross-entropy loss?
TensorFlow
import tensorflow as tf

true_labels = tf.constant([1, 0, 0])
predicted_probs = tf.constant([[0.7, 0.2, 0.1]])
loss_fn = tf.keras.losses.CategoricalCrossentropy()
loss_value = loss_fn(true_labels, predicted_probs).numpy()
print(loss_value)
Attempts: 2 left
💡 Hint
Check the shapes and types of true labels and predictions.
✗ Incorrect
The true labels tensor has shape (3,), but the predictions have shape (1, 3), causing a shape mismatch error. CategoricalCrossentropy expects one-hot labels with the same shape as the predictions.
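One way the snippet could be fixed (a sketch, assuming the intent was a single sample of class 0): give the labels a batch dimension so they are one-hot with shape (1, 3), matching the predictions.

```python
import tensorflow as tf

# Fix: one-hot labels shaped (1, 3), same as the predictions.
true_labels = tf.constant([[1, 0, 0]], dtype=tf.float32)
predicted_probs = tf.constant([[0.7, 0.2, 0.1]])
loss_fn = tf.keras.losses.CategoricalCrossentropy()
loss_value = loss_fn(true_labels, predicted_probs).numpy()
print(loss_value)  # -ln(0.7) ≈ 0.3567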
🧠 Conceptual
expert · 1:30 remaining
Why use categorical cross-entropy instead of sparse categorical cross-entropy?
In which scenario is categorical cross-entropy loss preferred over sparse categorical cross-entropy loss?
Attempts: 2 left
💡 Hint
Consider the format of the true labels expected by each loss function.
✗ Incorrect
Categorical cross-entropy expects one-hot encoded labels, while sparse categorical cross-entropy expects integer class indices. Categorical cross-entropy is therefore preferred when the labels are already one-hot encoded, or when soft targets such as smoothed labels are needed.
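The two formats compute the same quantity; this plain-Python sketch shows one-hot and integer labels yielding an identical loss for one sample:

```python
import math

predicted = [0.1, 0.8, 0.1]  # predicted class probabilities for one sample

# Categorical form: one-hot label selects the true-class term in the sum.
one_hot = [0.0, 1.0, 0.0]
categorical_ce = -sum(y * math.log(p) for y, p in zip(one_hot, predicted))

# Sparse form: the integer class index picks the probability directly.
class_index = 1
sparse_ce = -math.log(predicted[class_index])

print(round(categorical_ce, 4), round(sparse_ce, 4))  # identical values
```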