TensorFlow · ~10 mins

Why thorough evaluation ensures reliability in TensorFlow - Test Your Understanding

Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)

Complete the code to compile the TensorFlow model with the correct loss function.

TensorFlow
model.compile(optimizer='adam', loss=[1], metrics=['accuracy'])
A. 'mse'
B. 'hinge'
C. 'mean_absolute_error'
D. 'sparse_categorical_crossentropy'
Common Mistakes
Using a regression loss like 'mse' for classification tasks.
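For reference, here is a minimal sketch of compiling a classifier with the correct loss. The model architecture and shapes (4 features, 10 classes) are illustrative assumptions, not part of the question:

```python
import numpy as np
import tensorflow as tf

# Minimal sketch (assumed shapes: 4 input features, 10 integer-labeled classes).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])

# Integer class labels call for sparse_categorical_crossentropy;
# 'mse' and 'mean_absolute_error' are regression losses, and 'hinge'
# expects -1/+1 targets, so none of them fit this classification task.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```

If your labels were one-hot encoded instead of integers, you would use 'categorical_crossentropy' rather than the sparse variant.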
2. Fill in the blank (medium)

Complete the code to fit the model on training data for 10 epochs.

TensorFlow
history = model.fit(x_train, y_train, epochs=[1], validation_split=0.2)
A. 10
B. 5
C. 20
D. 50
Common Mistakes
Setting epochs too low or too high without validation.
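A minimal runnable sketch of the fit call, using small synthetic data (the 64-sample dataset and model shapes are assumptions for illustration):

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-ins for real training data: 64 samples, 4 features, 10 classes.
x_train = np.random.rand(64, 4).astype('float32')
y_train = np.random.randint(0, 10, size=(64,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# validation_split=0.2 holds out the last 20% of the training data,
# so the returned History object records both training and validation curves.
history = model.fit(x_train, y_train, epochs=10,
                    validation_split=0.2, verbose=0)
```

Plotting `history.history['loss']` against `history.history['val_loss']` is the usual way to judge whether 10 epochs is too few (still improving) or too many (validation loss rising).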
3. Fill in the blank (hard)

Complete the code to evaluate the model on test data and get its accuracy.

TensorFlow
test_loss, [1] = model.evaluate(x_test, y_test)
A. accuracy
B. loss
C. acc
D. score
Common Mistakes
Using variable names that don't match the metric names.
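A minimal sketch of the evaluate call, again on synthetic data (shapes and model are illustrative assumptions):

```python
import numpy as np
import tensorflow as tf

# Synthetic test data: 32 samples, 4 features, 10 classes.
x_test = np.random.rand(32, 4).astype('float32')
y_test = np.random.randint(0, 10, size=(32,))

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# evaluate() returns the loss first, then each compiled metric in order,
# so the second value unpacked here is the accuracy.
test_loss, accuracy = model.evaluate(x_test, y_test, verbose=0)
```

The returned values follow the order loss-then-metrics as compiled; mismatched unpacking (e.g. swapping the two names) silently mislabels the numbers.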
4. Fill in the blank (hard)

Fill both blanks to create a confusion matrix and print it.

TensorFlow
from sklearn.metrics import [1]
cm = [2](y_test, y_pred)
print(cm)
A. confusion_matrix
B. classification_report
C. confusion_matrix_score
D. accuracy_score
Common Mistakes
Using incorrect function names or metrics for confusion matrix.
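A worked sketch with hypothetical binary labels and predictions, showing what the completed code produces:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical binary labels and model predictions for illustration.
y_test = np.array([0, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 0, 0, 1, 1])

# Rows are true classes, columns are predicted classes:
# cm[i][j] counts samples of true class i predicted as class j.
cm = confusion_matrix(y_test, y_pred)
print(cm)
# → [[2 1]
#    [1 2]]
```

Note there is no `confusion_matrix_score` in scikit-learn; that distractor mimics the `*_score` naming of metrics like `accuracy_score`.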
5. Fill in the blank (hard)

Fill all three blanks to calculate precision, recall, and F1-score.

TensorFlow
from sklearn.metrics import [1], [2], [3]
precision = [1](y_test, y_pred)
recall = [2](y_test, y_pred)
f1 = [3](y_test, y_pred)
print(f"Precision: {precision}\nRecall: {recall}\nF1-score: {f1}")
A. precision_score
B. recall_score
C. f1_score
D. accuracy_score
Common Mistakes
Mixing up metric function names or using accuracy instead of F1-score.
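A worked sketch of the completed code, reusing the same hypothetical binary labels as the confusion-matrix example:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

# Hypothetical binary labels and predictions for illustration.
y_test = np.array([0, 1, 1, 0, 1, 0])
y_pred = np.array([0, 1, 0, 0, 1, 1])

precision = precision_score(y_test, y_pred)  # TP / (TP + FP)
recall = recall_score(y_test, y_pred)        # TP / (TP + FN)
f1 = f1_score(y_test, y_pred)                # harmonic mean of precision and recall
print(f"Precision: {precision}\nRecall: {recall}\nF1-score: {f1}")
```

These defaults assume binary labels; for multi-class targets, pass an `average` argument (e.g. `average='macro'`) to each function. Unlike `accuracy_score`, the F1-score stays informative on imbalanced classes, which is why it is the asked-for metric here.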