TensorFlow · ~20 mins

Categorical cross-entropy loss in TensorFlow - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️
Categorical Cross-Entropy Master
Get all challenges correct to earn this badge!
Predict Output (intermediate)
Output of categorical cross-entropy loss calculation
What is the output value of the categorical cross-entropy loss for the given true labels and predicted probabilities?
TensorFlow
import tensorflow as tf

true_labels = tf.constant([[0, 1, 0], [1, 0, 0]], dtype=tf.float32)
predicted_probs = tf.constant([[0.1, 0.8, 0.1], [0.7, 0.2, 0.1]], dtype=tf.float32)

loss_fn = tf.keras.losses.CategoricalCrossentropy()
loss_value = loss_fn(true_labels, predicted_probs).numpy()
print(loss_value)
A. 0.5108256
B. 0.22314353
C. 0.28990925
D. 0.6931472
💡 Hint
Recall that categorical cross-entropy measures the difference between true labels and predicted probabilities using the negative log likelihood.
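The quiz's loss value can be checked by hand. The following standard-library sketch reproduces what `tf.keras.losses.CategoricalCrossentropy` computes by default: per sample, the negative log of the probability assigned to the true class, then the mean over the batch.

```python
import math

# One-hot targets and predicted probability distributions from the challenge
true_labels = [[0, 1, 0], [1, 0, 0]]
predicted_probs = [[0.1, 0.8, 0.1], [0.7, 0.2, 0.1]]

# Per-sample loss: -sum(y_true * log(y_pred)) over the class axis
per_sample = [
    -sum(t * math.log(p) for t, p in zip(y_true, y_pred))
    for y_true, y_pred in zip(true_labels, predicted_probs)
]

# Keras reduces with the mean over the batch by default
loss = sum(per_sample) / len(per_sample)
print(round(loss, 8))  # 0.28990925
```

Since each row of `true_labels` is one-hot, only the true class contributes: the loss is the average of `-ln(0.8)` and `-ln(0.7)`.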
Model Choice (intermediate)
Choosing the correct model output activation for categorical cross-entropy
Which activation function should the model's output layer use when training with categorical cross-entropy loss on multi-class classification?
A. Softmax
B. Sigmoid
C. ReLU
D. Linear
💡 Hint
Think about how probabilities for multiple classes should sum up.
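Softmax is the activation that turns arbitrary logits into a valid probability distribution over classes. A minimal standard-library sketch (mirroring what `tf.nn.softmax` computes, with the usual max-subtraction for numerical stability):

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])
print(probs)  # non-negative values that sum to 1
```

Every output is strictly positive and the outputs sum to 1, which is exactly what categorical cross-entropy assumes of its predictions.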
Hyperparameter (advanced)
Effect of label smoothing on categorical cross-entropy loss
What is the main effect of applying label smoothing when using categorical cross-entropy loss during training?
A. It speeds up training by reducing the number of classes.
B. It prevents overfitting by making the labels less confident, distributing some probability mass to other classes.
C. It increases the confidence of the model predictions by sharpening the labels.
D. It converts categorical cross-entropy into mean squared error loss.
💡 Hint
Think about how smoothing changes the target labels.
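Label smoothing rewrites each one-hot target as `y * (1 - eps) + eps / num_classes`, which is the formula Keras applies when you pass `label_smoothing` to `CategoricalCrossentropy`. A short sketch of that transformation:

```python
def smooth_labels(one_hot, smoothing=0.1):
    # y_smooth = y * (1 - eps) + eps / num_classes
    k = len(one_hot)
    return [y * (1.0 - smoothing) + smoothing / k for y in one_hot]

smoothed = smooth_labels([0.0, 1.0, 0.0], smoothing=0.1)
print(smoothed)  # true class drops below 1, other classes get a small share
```

The target still sums to 1, but the true class is no longer a hard 1.0, so the model is never pushed toward infinitely confident logits.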
🔧 Debug (advanced)
Identifying the error in categorical cross-entropy loss usage
What error will this code raise when computing categorical cross-entropy loss?
TensorFlow
import tensorflow as tf

true_labels = tf.constant([1, 0, 0])
predicted_probs = tf.constant([[0.7, 0.2, 0.1]])

loss_fn = tf.keras.losses.CategoricalCrossentropy()
loss_value = loss_fn(true_labels, predicted_probs).numpy()
print(loss_value)
A. RuntimeError due to invalid probability values
B. TypeError because true_labels is not a float tensor
C. No error, outputs a valid loss value
D. ValueError due to shape mismatch between true labels and predictions
💡 Hint
Check the shapes and types of true labels and predictions.
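In the snippet above, `true_labels` is a rank-1 integer tensor while `predicted_probs` has shape `(1, 3)`. `CategoricalCrossentropy` expects the labels as float one-hot vectors with the same `(batch, num_classes)` shape as the predictions. A standard-library sketch of the corrected computation (assuming the same mean-over-batch reduction Keras uses):

```python
import math

# Fix: one-hot float labels with the same (batch, num_classes)
# shape as the predictions - was [1, 0, 0], a rank-1 int tensor
true_labels = [[1.0, 0.0, 0.0]]
predicted_probs = [[0.7, 0.2, 0.1]]

# Mean over the batch of -sum(y_true * log(y_pred))
loss = sum(
    -sum(t * math.log(p) for t, p in zip(y_row, p_row))
    for y_row, p_row in zip(true_labels, predicted_probs)
) / len(true_labels)
print(round(loss, 7))  # 0.3566749, i.e. -ln(0.7)
```

With matching shapes the loss is simply the negative log of the probability the model assigned to class 0.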
🧠 Conceptual (expert)
Why use categorical cross-entropy instead of sparse categorical cross-entropy?
In which scenario is categorical cross-entropy loss preferred over sparse categorical cross-entropy loss?
A. When true labels are one-hot encoded vectors
B. When using binary classification with two classes
C. When the model output is a single scalar value
D. When true labels are provided as integer class indices
💡 Hint
Consider the format of the true labels expected by each loss function.
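The two losses differ only in the label format they accept: `SparseCategoricalCrossentropy` takes integer class indices, while `CategoricalCrossentropy` takes one-hot vectors. A small sketch converting between the two formats:

```python
# Integer class indices: what sparse categorical cross-entropy expects
sparse_labels = [1, 0]

# Equivalent one-hot vectors: what categorical cross-entropy expects
num_classes = 3
one_hot_labels = [
    [1.0 if c == idx else 0.0 for c in range(num_classes)]
    for idx in sparse_labels
]
print(one_hot_labels)  # [[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]]
```

Both encodings describe the same targets, so either loss yields the same value; the one-hot form is required when the labels are already one-hot (e.g. after label smoothing or with soft targets).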