Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)
Complete the code to calculate the accuracy of a model's predictions.
Tags: ML, Python

accuracy = [1](y_true, y_pred)

Common mistakes:
- Using mean_squared_error for classification accuracy
- Confusing accuracy_score with log_loss

Explanation: The accuracy_score function calculates the fraction of correct predictions.
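For reference, a minimal sketch of the completed line using scikit-learn's accuracy_score (the label arrays here are hypothetical example data):

```python
from sklearn.metrics import accuracy_score

y_true = [0, 1, 1, 0, 1]  # ground-truth labels (hypothetical example data)
y_pred = [0, 1, 0, 0, 1]  # model predictions (hypothetical example data)

# accuracy_score returns the fraction of predictions that match the labels
accuracy = accuracy_score(y_true, y_pred)
print(accuracy)  # 4 of 5 correct -> 0.8
```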
2. Fill in the blank (medium)
Complete the code to plot the model's loss over epochs using matplotlib.
Tags: ML, Python

plt.plot(history.history['[1]'])

Common mistakes:
- Plotting accuracy instead of loss
- Using validation keys when plotting training loss

Explanation: The 'loss' key contains the training loss values per epoch.
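A sketch of the completed line, assuming a Keras-style history object whose `.history` dict maps metric names to per-epoch lists (the stand-in class and loss values below are hypothetical):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

class History:
    # Stand-in for the object returned by model.fit (hypothetical values)
    history = {"loss": [0.9, 0.6, 0.4, 0.3], "val_loss": [1.0, 0.7, 0.5, 0.45]}

history = History()

# The 'loss' key holds the training loss per epoch; 'val_loss' is validation loss
plt.plot(history.history["loss"])
plt.xlabel("epoch")
plt.ylabel("training loss")
```

Calling `plt.show()` (or `plt.savefig(...)`) afterwards would display or save the figure.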
3. Fill in the blank (hard)
Fix the error in the code to compute the F1 score for binary classification.
Tags: ML, Python

f1 = f1_score(y_true, [1], average='binary')

Common mistakes:
- Passing the true labels twice
- Using predicted probabilities instead of labels

Explanation: The f1_score function requires the predicted labels as the second argument.
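A minimal sketch of the corrected call with scikit-learn's f1_score (example labels are hypothetical):

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 1, 0, 1, 1]  # ground-truth labels (hypothetical example data)
y_pred = [0, 1, 0, 0, 1, 1]  # predicted class labels, not probabilities

# The second argument must be the predicted labels, not y_true again
f1 = f1_score(y_true, y_pred, average='binary')
```

Here precision is 3/3 and recall is 3/4, so the harmonic mean works out to 6/7.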
4. Fill in the blanks (hard)
Fill both blanks to create a dictionary comprehension that tracks precision and recall for each class.
Tags: ML, Python

metrics = {cls: {'precision': precision_score(y_true, y_pred, pos_label=cls), 'recall': [1](y_true, y_pred, pos_label=[2])} for cls in classes}

Common mistakes:
- Using precision_score twice
- Passing predicted labels as pos_label

Explanation: Use recall_score for recall and cls as the positive label for each class.
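A sketch of the completed comprehension, assuming a binary labeling task; the class names and label arrays are hypothetical example data:

```python
from sklearn.metrics import precision_score, recall_score

y_true = ["cat", "dog", "cat", "dog", "cat"]  # hypothetical ground truth
y_pred = ["cat", "cat", "cat", "dog", "dog"]  # hypothetical predictions
classes = ["cat", "dog"]

# For each class, treat that class as the positive label (pos_label=cls)
metrics = {
    cls: {
        "precision": precision_score(y_true, y_pred, pos_label=cls),
        "recall": recall_score(y_true, y_pred, pos_label=cls),
    }
    for cls in classes
}
```

Passing `cls` as `pos_label` is what lets one binary metric call be reused per class.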
5. Fill in the blanks (hard)
Fill all three blanks to create a filtered dictionary of metrics where the F1 score is above 0.7.
Tags: ML, Python

filtered_metrics = {cls: metrics[cls] for cls in metrics if metrics[cls]['[1]'] [2] [3]}

Common mistakes:
- Using precision instead of f1_score
- Using the wrong comparison operator
- Using the string '0.7' instead of a number

Explanation: Filter classes where the f1_score is greater than 0.7.
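A sketch of the completed filter; the per-class metrics dict below is hypothetical example data:

```python
# Hypothetical per-class metrics, as might be produced by the previous task
metrics = {
    "cat":  {"precision": 0.9, "recall": 0.8, "f1_score": 0.85},
    "dog":  {"precision": 0.6, "recall": 0.5, "f1_score": 0.55},
    "bird": {"precision": 0.8, "recall": 0.7, "f1_score": 0.75},
}

# Keep only classes whose F1 exceeds 0.7 (numeric 0.7, not the string '0.7')
filtered_metrics = {cls: metrics[cls] for cls in metrics if metrics[cls]["f1_score"] > 0.7}
```

Comparing against the string `'0.7'` would raise a TypeError in Python 3, which is why the threshold must be a number.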