Complete the code to calculate the accuracy of a fine-tuned model's predictions.
accuracy = sum(predictions == [1]) / len(predictions)
Accuracy compares the model's predictions to the true labels and measures the fraction that are correct.
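As a concrete illustration, here is the accuracy formula worked out in plain Python on made-up labels (the lists below are hypothetical; note that the elementwise `predictions == true_labels` comparison in the exercise assumes NumPy arrays, so a `zip` is used here for plain lists):

```python
# Hypothetical predictions and true labels for illustration only.
true_labels = [1, 0, 1, 1, 0]
predictions = [1, 0, 0, 1, 0]

# Count matching entries, then divide by the total: 4 of 5 match.
correct = sum(p == t for p, t in zip(predictions, true_labels))
accuracy = correct / len(predictions)
print(accuracy)  # 0.8
```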
Complete the code to compute the loss of the fine-tuned model on test data.
loss = model.evaluate(test_inputs, [1])
The model's loss is calculated by comparing its predictions to the true test outputs (labels).
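To show what "comparing predictions to true outputs" means numerically, here is a hand-computed mean squared error on hypothetical values (a stand-in for what a call like `model.evaluate` does internally; the actual loss function depends on how the model was compiled):

```python
# Hypothetical model outputs and true test labels.
predictions = [0.9, 0.2, 0.8]
test_labels = [1.0, 0.0, 1.0]

# Mean squared error: average of the squared differences.
loss = sum((p - t) ** 2 for p, t in zip(predictions, test_labels)) / len(predictions)
print(round(loss, 4))  # 0.03
```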
Fix the error in the code to generate predictions from the fine-tuned model.
predictions = model.[1](new_data)
fit would train the model instead of predicting, and evaluate would return loss and metrics. To get predictions, use the predict method on new data.
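The difference between the three methods can be sketched with a toy class (hypothetical; a real Keras model exposes the same method names, with `fit` training, `evaluate` scoring labeled data, and `predict` returning outputs for new inputs):

```python
class ToyModel:
    """Toy stand-in that mimics the fit/evaluate/predict split."""

    def fit(self, inputs, labels):
        # "Training": just remember the mean label.
        self.mean = sum(labels) / len(labels)

    def evaluate(self, inputs, labels):
        # Returns a loss on labeled test data, not predictions.
        return sum((self.mean - t) ** 2 for t in labels) / len(labels)

    def predict(self, inputs):
        # Returns one prediction per new input.
        return [self.mean for _ in inputs]

model = ToyModel()
model.fit([1, 2, 3], [0.0, 1.0, 1.0])
predictions = model.predict([4, 5])  # two predictions, one per input
```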
Fill both blanks to create a dictionary of accuracy and loss after evaluation.
results = {'accuracy': [1], 'loss': [2]}
Accuracy is computed by comparing the true labels to the predictions. Loss is the first value returned by model.evaluate.
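A minimal sketch of packing both metrics into a dictionary, using made-up labels and a simple 0/1 error as a stand-in for the loss that `model.evaluate` would return:

```python
# Hypothetical labels; in practice loss would come from model.evaluate.
true_labels = [1, 0, 1, 1]
predictions = [1, 1, 1, 0]

accuracy = sum(p == t for p, t in zip(predictions, true_labels)) / len(true_labels)
loss = 1 - accuracy  # 0/1 error used here purely for illustration

results = {'accuracy': accuracy, 'loss': loss}
print(results)  # {'accuracy': 0.5, 'loss': 0.5}
```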
Fill all three blanks to compute precision, recall, and F1 score for the fine-tuned model.
precision = precision_score(true_labels, [1])
recall = recall_score([2], predictions)
f1 = f1_score(true_labels, [3])
Precision, recall, and F1 score all compare the true labels to the model's predictions, passed in the order (true_labels, predictions).
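To make the three metrics concrete, here they are computed by hand from true/false positive and false negative counts on hypothetical labels (scikit-learn's precision_score, recall_score, and f1_score return the same values for binary labels):

```python
# Hypothetical labels for illustration.
true_labels = [1, 0, 1, 1, 0, 1]
predictions = [1, 0, 0, 1, 1, 1]

tp = sum(1 for t, p in zip(true_labels, predictions) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(true_labels, predictions) if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in zip(true_labels, predictions) if t == 1 and p == 0)  # false negatives

precision = tp / (tp + fp)                          # of predicted 1s, how many were right
recall = tp / (tp + fn)                             # of actual 1s, how many were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
print(precision, recall, f1)  # 0.75 0.75 0.75
```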