Complete the code to calculate the accuracy of an agent's predictions.
accuracy = correct_predictions / [1]
Accuracy is the ratio of correct predictions to the total number of predictions made.
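As a quick self-check, the completed line can be run with hypothetical counts (the values below are illustrative, not part of the exercise):

```python
# Illustrative counts (assumed values, not from the exercise).
correct_predictions = 42
total_predictions = 50

# Accuracy: correct predictions divided by total predictions.
accuracy = correct_predictions / total_predictions
print(accuracy)  # 0.84
```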
Complete the code to compute the relevance score using cosine similarity.
relevance_score = cosine_similarity(agent_vector, [1])
Relevance is measured by comparing the agent's output vector to the query vector using cosine similarity.
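To make the exercise runnable without external libraries, a minimal `cosine_similarity` can be defined by hand; the vectors and the `query_vector` name below are assumptions for illustration:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product divided by the product of the magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

agent_vector = [1.0, 2.0, 3.0]  # hypothetical agent output vector
query_vector = [1.0, 2.0, 3.0]  # hypothetical query vector

relevance_score = cosine_similarity(agent_vector, query_vector)
# Identical vectors point in the same direction, so similarity is 1.0.
```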
Fix the error in the code that calculates F1 score from precision and recall.
f1_score = 2 * (precision * recall) / [1]
The F1 score formula divides twice the product of precision and recall by their sum.
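A quick numeric check of the formula, with assumed precision and recall values chosen only for illustration:

```python
# Hypothetical precision and recall values.
precision = 0.8
recall = 0.6

# F1 is the harmonic mean of precision and recall:
# twice their product divided by their sum.
f1_score = 2 * (precision * recall) / (precision + recall)
```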
Fill both blanks to create a dictionary of agent accuracies for agents with accuracy above 0.8.
accuracies = {agent: [1] for agent, [2] in results.items() if accuracy > 0.8}
The dictionary comprehension extracts accuracy for each agent and filters those with accuracy above 0.8.
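One possible completed form, assuming `results` maps agent names directly to accuracy values (the sample data below is hypothetical):

```python
# Hypothetical results: agent name -> accuracy.
results = {"agent_a": 0.92, "agent_b": 0.75, "agent_c": 0.88}

# Keep only agents whose accuracy exceeds 0.8.
accuracies = {agent: accuracy for agent, accuracy in results.items() if accuracy > 0.8}
print(accuracies)  # {'agent_a': 0.92, 'agent_c': 0.88}
```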
Fill all three blanks to filter agents with relevance above 0.7 and create a summary dictionary.
summary = {agent: [1] for agent, [2] in agent_results.items() if [3] > 0.7}
The dictionary comprehension extracts relevance scores for agents and filters those with relevance above 0.7.
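One possible completed form, assuming `agent_results` maps agent names directly to relevance scores (the sample data below is hypothetical):

```python
# Hypothetical agent_results: agent name -> relevance score.
agent_results = {"agent_a": 0.9, "agent_b": 0.4, "agent_c": 0.65}

# Keep only agents whose relevance exceeds 0.7.
summary = {agent: relevance for agent, relevance in agent_results.items() if relevance > 0.7}
print(summary)  # {'agent_a': 0.9}
```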