Which of the following is least likely to cause a persistently high training loss in a TensorFlow model?
Think about what affects the training loss itself versus what merely measures it.
Evaluating on training data does not cause high training loss; it just measures it. The other options directly cause the model to perform poorly during training.
What error will this TensorFlow code produce when running predictions?
import tensorflow as tf
import numpy as np

model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, activation='relu', input_shape=(3,))
])
model.compile(optimizer='adam', loss='mse')

input_data = np.array([[1, 2]])  # Shape (1, 2), should be (1, 3)
predictions = model.predict(input_data)
Check if the input shape matches the model's expected input shape.
The model expects input shape (None, 3) but receives (None, 2), causing a ValueError about incompatible input shape.
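A minimal sketch of the fix, assuming the missing third feature can simply be supplied (the feature values here are illustrative):

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, activation='relu', input_shape=(3,))
])
model.compile(optimizer='adam', loss='mse')

# Provide all three features the model expects.
input_data = np.array([[1, 2, 3]])  # Shape (1, 3) matches input_shape=(3,)
predictions = model.predict(input_data)
print(predictions.shape)  # One sample in, five Dense units out
```

With the input shape matching, `predict` returns an array of shape `(1, 5)` instead of raising a ValueError.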
You notice your TensorFlow model performs very well on training data but poorly on validation data. Which model change is most likely to reduce overfitting?
Think about regularization techniques that prevent overfitting.
Dropout is a regularization method that helps reduce overfitting by randomly disabling neurons during training, forcing the model to generalize better.
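As an illustrative sketch (the layer sizes and the 0.5 rate are arbitrary choices, not a recommendation), a Dropout layer can be inserted between Dense layers:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation='relu', input_shape=(10,)),
    # Randomly zeroes 50% of the activations on each training step.
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer='adam', loss='binary_crossentropy')
```

Note that Dropout is only active during training; `model.predict` runs with dropout disabled, so validation metrics reflect the full network.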
Given a binary classification confusion matrix:
[[90, 10], [40, 60]]
Where rows are true classes and columns are predicted classes, what is the precision for the positive class?
Precision = True Positives / (True Positives + False Positives)
True Positives = 60, False Positives = 10, so precision = 60 / (60 + 10) ≈ 0.857.
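The same calculation can be checked numerically; this sketch indexes the matrix directly, treating class 1 (the second row/column) as the positive class:

```python
import numpy as np

# Rows are true classes, columns are predicted classes.
cm = np.array([[90, 10],
               [40, 60]])

tp = cm[1, 1]  # true positives: true 1, predicted 1
fp = cm[0, 1]  # false positives: true 0, predicted 1
precision = tp / (tp + fp)
print(round(precision, 3))  # 0.857
```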
While training a deep neural network in TensorFlow, the loss suddenly becomes NaN after several epochs. Which debugging step is most effective for identifying and fixing exploding gradients?
Exploding gradients cause very large updates; controlling gradient size helps.
Gradient clipping limits the size of gradients during backpropagation, preventing them from becoming too large and causing NaN loss.
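In Keras, clipping can be enabled directly on the optimizer via the `clipnorm` (or `clipvalue`) argument. A minimal sketch, where the threshold of 1.0 and the tiny model are arbitrary choices for illustration:

```python
import tensorflow as tf

# clipnorm rescales each gradient so its L2 norm is at most 1.0;
# 1.0 is a common starting point, not a universal setting.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3, clipnorm=1.0)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(1)
])
model.compile(optimizer=optimizer, loss='mse')
```

If the NaNs persist after clipping, lowering the learning rate or inspecting the input data for NaN/Inf values are reasonable next steps.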