
Error analysis patterns in TensorFlow - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual
intermediate
Identifying common causes of high training loss

Which of the following is least likely to cause a persistently high training loss in a TensorFlow model?

A. Having a dataset with mislabeled examples in the training set
B. Evaluating the model on the training data instead of validation data
C. Using a model architecture that is too simple to capture the data patterns
D. Using a learning rate that is too high, causing the model to overshoot minima
💡 Hint

Think about what affects training loss versus evaluation procedure.
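The learning-rate option can be illustrated without TensorFlow. Here is a minimal pure-Python sketch on a toy loss f(w) = w² (the rates 0.1 and 1.1 are hypothetical values chosen for illustration): a small rate shrinks the loss, while a too-large rate makes each step overshoot the minimum and the loss grows.

```python
def gradient_descent(lr, steps=10, w=1.0):
    """Minimize the toy loss f(w) = w**2 with plain gradient descent."""
    for _ in range(steps):
        grad = 2 * w          # derivative of w**2
        w = w - lr * grad     # gradient step
    return w * w              # final loss value

low = gradient_descent(lr=0.1)   # converges toward 0
high = gradient_descent(lr=1.1)  # overshoots: |w| grows every step
print(low, high)
```

With lr = 0.1 each step multiplies w by 0.8; with lr = 1.1 it multiplies w by -1.2, so the loss diverges instead of decreasing — the behavior described in option D.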

Predict Output
intermediate
Output of TensorFlow model predictions with incorrect input shape

What error will this TensorFlow code produce when running predictions?

import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(5, activation='relu', input_shape=(3,))
])
model.compile(optimizer='adam', loss='mse')

input_data = np.array([[1, 2]])  # Shape (1, 2), but the layer expects (None, 3)
predictions = model.predict(input_data)
A. ValueError: Input 0 of layer sequential is incompatible with the layer: expected shape=(None, 3), found shape=(None, 2)
B. TypeError: Unsupported operand type(s) for +: 'int' and 'str'
C. No error, predictions will run and output shape (1, 5)
D. RuntimeError: Model has not been compiled yet
💡 Hint

Check if the input shape matches the model's expected input shape.
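Keras raises the ValueError because the layer records an input spec when it is built, and predict-time inputs are checked against it. A hypothetical pure-Python sketch of that check (not the actual Keras internals, just the idea):

```python
def check_input_shape(expected_last_dim, batch):
    """Loosely mimic Keras' input-spec check: the last axis must match."""
    found = len(batch[0])  # last-axis size of a 2-D nested list
    if found != expected_last_dim:
        raise ValueError(
            f"expected shape=(None, {expected_last_dim}), "
            f"found shape=(None, {found})"
        )

check_input_shape(3, [[1, 2, 3]])   # OK: last axis matches
try:
    check_input_shape(3, [[1, 2]])  # shape (1, 2) -> ValueError
except ValueError as e:
    print(e)
```

The batch dimension (None) is deliberately unconstrained; only the feature axis must match, which is why the error message in option A reports both shapes with a leading None.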

Model Choice
advanced
Choosing a model to reduce overfitting detected in error analysis

You notice your TensorFlow model performs very well on training data but poorly on validation data. Which model change is most likely to reduce overfitting?

A. Increase the number of layers and neurons to improve model capacity
B. Remove batch normalization layers to simplify the model
C. Add dropout layers to randomly disable neurons during training
D. Use a smaller batch size during training
💡 Hint

Think about regularization techniques that prevent overfitting.
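What a dropout layer does during training can be sketched in pure Python (in TensorFlow you would simply add `tf.keras.layers.Dropout(rate)`). This is the "inverted" variant used by Keras: each unit is zeroed with probability `rate`, and survivors are scaled up so the expected activation is unchanged at inference time.

```python
import random

def inverted_dropout(values, rate, rng):
    """Inverted dropout: zero each unit with probability `rate`,
    scale survivors by 1/(1-rate) so the expected sum is unchanged."""
    keep = 1.0 - rate
    return [v / keep if rng.random() < keep else 0.0 for v in values]

rng = random.Random(0)               # fixed seed for reproducibility
out = inverted_dropout([1.0] * 8, rate=0.5, rng=rng)
print(out)  # roughly half the units zeroed, the rest scaled to 2.0
```

Because a different random subset of neurons is disabled on every step, the network cannot rely on any single co-adapted pathway, which is what reduces overfitting.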

Metrics
advanced
Interpreting confusion matrix metrics for imbalanced data

Given a binary classification confusion matrix:

[[90, 10],
 [40, 60]]

Where rows are true classes and columns are predicted classes (class 1, the second row and column, is the positive class), what is the precision for the positive class?

A. 0.857
B. 0.4
C. 0.6
D. 0.75
💡 Hint

Precision = True Positives / (True Positives + False Positives)
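Working through the matrix above with the second row/column as the positive class: TP = 60 (true positive, predicted positive) and FP = 10 (true negative, predicted positive), so precision = 60 / (60 + 10) ≈ 0.857. A quick sketch of that calculation:

```python
def precision_positive(cm):
    """Precision for the positive class (index 1) of a 2x2 confusion
    matrix laid out as [[TN, FP], [FN, TP]] (rows true, columns predicted)."""
    tp = cm[1][1]
    fp = cm[0][1]
    return tp / (tp + fp)

cm = [[90, 10],
      [40, 60]]
print(round(precision_positive(cm), 3))  # 0.857
```

Note that 0.6 (option C) is the recall, 60 / (60 + 40), which is the classic trap with imbalanced data.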

🔧 Debug
expert
Debugging exploding gradients in TensorFlow training

While training a deep neural network in TensorFlow, the loss suddenly becomes NaN after several epochs. Which debugging step is most effective for identifying and fixing exploding gradients?

A. Increase the learning rate to speed up convergence
B. Switch optimizer from Adam to SGD without momentum
C. Remove dropout layers to stabilize training
D. Add gradient clipping to limit the gradient values during backpropagation
💡 Hint

Exploding gradients cause very large updates; controlling gradient size helps.
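In Keras, gradient clipping is usually a one-liner: pass `clipnorm` (or `clipvalue`) to the optimizer, e.g. `tf.keras.optimizers.Adam(clipnorm=1.0)`. The clip-by-norm operation itself can be sketched in pure Python: if the gradient's L2 norm exceeds a threshold, rescale the whole vector so its direction is preserved but its magnitude is bounded.

```python
import math

def clip_by_norm(grads, max_norm):
    """Rescale a gradient vector so its L2 norm is at most max_norm."""
    norm = math.sqrt(sum(g * g for g in grads))
    if norm <= max_norm:
        return grads             # small gradients pass through unchanged
    scale = max_norm / norm
    return [g * scale for g in grads]

exploding = [300.0, 400.0]       # L2 norm 500, large enough to destabilize training
clipped = clip_by_norm(exploding, max_norm=1.0)
print(clipped)  # [0.6, 0.8] -> norm 1.0, same direction
```

Bounding the update size this way prevents the runaway weight growth that eventually overflows to NaN in the loss.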