Recall & Review
beginner
What is categorical cross-entropy loss used for in machine learning?
It measures how well a model's predicted probabilities match the true categories when there are multiple classes. It helps the model learn by penalizing confident wrong predictions much more heavily than near-misses.
intermediate
Write the formula for categorical cross-entropy loss.
Loss = -∑(y_true * log(y_pred)) where y_true is the true label (one-hot encoded) and y_pred is the predicted probability for each class.
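The formula can be checked with a tiny framework-free example (the probabilities below are made up for illustration):

```python
import math

def categorical_cross_entropy(y_true, y_pred):
    """Loss = -sum(y_true * log(y_pred)) over the classes of one sample."""
    return -sum(t * math.log(p) for t, p in zip(y_true, y_pred))

# True class is index 1 (one-hot encoded); the model gives it probability 0.7.
y_true = [0.0, 1.0, 0.0]
y_pred = [0.2, 0.7, 0.1]

loss = categorical_cross_entropy(y_true, y_pred)
# Only the true class contributes, so loss = -log(0.7) ≈ 0.357
```

Note that the zeros in `y_true` wipe out every term except the true class, so the loss is just the negative log-probability the model assigned to the correct answer.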
beginner
Why do we use one-hot encoding with categorical cross-entropy loss?
One-hot encoding turns the true class into a vector with 1 for the correct class and 0 for the others. This helps the loss function compare predicted probabilities directly to the true class.
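A minimal sketch of one-hot encoding, assuming classes are indexed 0 to n-1:

```python
def one_hot(index, num_classes):
    """Vector with 1.0 at the true class position and 0.0 everywhere else."""
    return [1.0 if i == index else 0.0 for i in range(num_classes)]

encoded = one_hot(2, 4)
# → [0.0, 0.0, 1.0, 0.0]
```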
intermediate
How does TensorFlow compute categorical cross-entropy loss?
TensorFlow uses functions like tf.keras.losses.CategoricalCrossentropy which take true labels and predicted probabilities, then calculate the average loss over all samples.
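The "average over all samples" step can be sketched in plain Python, mirroring what `tf.keras.losses.CategoricalCrossentropy()(y_true, y_pred)` computes with its default reduction (the values here are made up for illustration):

```python
import math

def batch_categorical_cross_entropy(y_true_batch, y_pred_batch):
    """Per-sample loss -sum(y * log(p)), then the mean over the batch."""
    per_sample = [
        -sum(t * math.log(p) for t, p in zip(y_true, y_pred))
        for y_true, y_pred in zip(y_true_batch, y_pred_batch)
    ]
    return sum(per_sample) / len(per_sample)

# Batch of two samples: true classes are 0 and 1.
y_true = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
y_pred = [[0.8, 0.1, 0.1], [0.3, 0.6, 0.1]]

loss = batch_categorical_cross_entropy(y_true, y_pred)
# Mean of -log(0.8) and -log(0.6)
```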
intermediate
What is the difference between categorical cross-entropy and sparse categorical cross-entropy?
Categorical cross-entropy expects one-hot encoded labels, while sparse categorical cross-entropy expects integer labels (class indices). Both measure the same loss but handle labels differently.
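A plain-Python sketch of the difference: with an integer label, the loss simply indexes into the predictions instead of multiplying by a one-hot vector, and both versions give the same number for the same sample.

```python
import math

def categorical_ce(y_true_one_hot, y_pred):
    # One-hot labels: zeros cancel every class except the true one.
    return -sum(t * math.log(p) for t, p in zip(y_true_one_hot, y_pred))

def sparse_categorical_ce(true_index, y_pred):
    # Integer label: index directly into the predicted probabilities.
    return -math.log(y_pred[true_index])

y_pred = [0.1, 0.7, 0.2]
# Same sample, same loss, two label formats:
diff = categorical_ce([0.0, 1.0, 0.0], y_pred) - sparse_categorical_ce(1, y_pred)
assert abs(diff) < 1e-12
```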
What type of problem is categorical cross-entropy loss mainly used for?
Categorical cross-entropy is designed for multi-class classification problems where the model predicts probabilities for multiple classes.
Which of these is required for categorical cross-entropy loss input labels?
Categorical cross-entropy expects labels as one-hot encoded vectors representing the true class.
In TensorFlow, which function computes categorical cross-entropy loss?
tf.keras.losses.CategoricalCrossentropy is the built-in function to compute this loss.
What does a lower categorical cross-entropy loss value indicate?
Lower loss means the predicted probabilities are closer to the true labels, indicating better predictions.
Which loss function should you use if your labels are integers instead of one-hot vectors?
Sparse categorical cross-entropy handles integer labels directly without needing one-hot encoding.
Explain in your own words how categorical cross-entropy loss helps a model learn in multi-class classification.
Think about how the loss changes when the model guesses right or wrong.
Describe the difference between categorical cross-entropy and sparse categorical cross-entropy loss functions and when to use each.
Focus on how labels are represented and what the loss function expects.