Complete the code to import the library used for monitoring model performance.
import [1]
The prometheus_client library is commonly used to collect and expose metrics for monitoring NLP models.
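As a completed version of the blank above, a minimal sketch (assuming the `prometheus_client` package is installed):

```python
import prometheus_client  # fills blank [1]

# The module provides metric types such as Counter, Gauge, and Histogram,
# plus start_http_server for exposing metrics over HTTP.
```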
Complete the code to define a metric that counts prediction requests.
prediction_counter = [1]('prediction_requests_total', 'Total number of prediction requests')
A Counter is used to count events like prediction requests in monitoring.
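A filled-in sketch of the answer, with a hypothetical `predict` function standing in for a real model so the counter has something to count:

```python
from prometheus_client import Counter

# Counter fills blank [1]; the two arguments are the metric name and help text.
prediction_counter = Counter('prediction_requests_total',
                             'Total number of prediction requests')

def predict(text):
    """Hypothetical prediction endpoint that records each request."""
    prediction_counter.inc()  # increment the counter once per request
    return len(text)          # placeholder result for illustration

predict("hello")
predict("world")
```

Counters only go up, which is why they suit event totals like request counts; rates are derived later in Prometheus queries.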
Fix the error in the code to start the Prometheus metrics server on port 8000.
from prometheus_client import start_http_server
start_http_server([1])
The start_http_server function expects an integer port number, so 8000 without quotes is correct.
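The corrected code in full, with a hypothetical `demo_requests_total` metric added so the exposed endpoint has something to show (a sketch assuming port 8000 is free):

```python
from prometheus_client import start_http_server, Counter
import urllib.request

# start_http_server expects an integer port, so 8000 (not the string "8000") is correct.
start_http_server(8000)

demo_counter = Counter('demo_requests_total', 'Total demo requests')  # hypothetical metric
demo_counter.inc()

# The exporter now serves plain-text metrics at /metrics on port 8000.
metrics_page = urllib.request.urlopen("http://localhost:8000/metrics").read().decode()
```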
Fill both blanks to create a dictionary comprehension that tracks average prediction latency per model.
avg_latency = {model: [1] for model, times in latency_data.items() if [2] > 0}
The average latency is calculated by dividing the sum of times by the count of times. The condition ensures we only include models with recorded latencies.
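A filled-in sketch with hypothetical latency data (blank [1] is `sum(times) / len(times)`, blank [2] is `len(times)`):

```python
# Hypothetical per-model latency samples in seconds.
latency_data = {
    "sentiment": [0.12, 0.18, 0.15],
    "ner": [0.30, 0.34],
    "unused_model": [],  # no recorded latencies, so the condition filters it out
}

avg_latency = {
    model: sum(times) / len(times)          # blank [1]: mean of the recorded times
    for model, times in latency_data.items()
    if len(times) > 0                       # blank [2]: skip models with no data
}
```

The `len(times) > 0` guard matters because dividing by a zero count would raise `ZeroDivisionError`.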
Fill both blanks to create a dictionary comprehension that filters models with accuracy above 0.8 and maps model names to their accuracies.
high_accuracy = {[1]: [2] for [1], [2] in accuracy_data.items() if [2] > 0.8}
The dictionary comprehension uses model as the key (blank [1]) and accuracy as the value (blank [2]), and the condition keeps only accuracies above 0.8.
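A filled-in sketch with hypothetical accuracy scores (blank [1] is `model`, blank [2] is `accuracy`):

```python
# Hypothetical accuracy scores keyed by model name.
accuracy_data = {"bert": 0.91, "lstm": 0.76, "roberta": 0.88}

high_accuracy = {
    model: accuracy                          # blank [1] is the key, blank [2] the value
    for model, accuracy in accuracy_data.items()
    if accuracy > 0.8                        # keep only models above the threshold
}
# → {'bert': 0.91, 'roberta': 0.88}
```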