MLOps · DevOps · ~30 mins

Comparing experiment runs in MLOps - Mini Project: Build & Apply

Comparing Experiment Runs
📖 Scenario: You are working as a data scientist running machine learning experiments. Each experiment run produces results with metrics like accuracy and loss. You want to compare these runs to see which one performed best.
🎯 Goal: Build a simple Python script that stores experiment runs as dictionaries, sets a metric to compare, filters runs that meet a threshold, and prints the best run based on that metric.
📋 What You'll Learn
Create a dictionary called experiment_runs with three runs and their metrics
Create a variable called metric_to_compare set to the metric name to compare
Use a dictionary comprehension to filter runs with metric values above a threshold
Print the run with the highest value for the chosen metric
💡 Why This Matters
🌍 Real World
Data scientists often run many experiments and need to compare results quickly to choose the best model.
💼 Career
Knowing how to organize and compare experiment results is key for roles in machine learning operations (MLOps) and data science.
1
Create experiment runs data
Create a dictionary called experiment_runs with these exact entries: 'run1': {'accuracy': 0.82, 'loss': 0.35}, 'run2': {'accuracy': 0.88, 'loss': 0.30}, 'run3': {'accuracy': 0.79, 'loss': 0.40}
Need a hint?

Use a dictionary with keys as run names and values as dictionaries of metrics.

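A minimal sketch of this step: a plain dictionary whose keys are run names and whose values are the metric dictionaries given above.

```python
# Each run maps to a dictionary of its recorded metrics.
experiment_runs = {
    'run1': {'accuracy': 0.82, 'loss': 0.35},
    'run2': {'accuracy': 0.88, 'loss': 0.30},
    'run3': {'accuracy': 0.79, 'loss': 0.40},
}
```

Nesting dictionaries this way keeps each run's metrics grouped together, so you can look up any value by run name and metric name, e.g. `experiment_runs['run2']['loss']`.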
2
Set the metric to compare
Create a variable called metric_to_compare and set it to the string 'accuracy'
Need a hint?

Just assign the string 'accuracy' to the variable metric_to_compare.

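This step is a single assignment. Storing the metric name in a variable (rather than hard-coding `'accuracy'` everywhere) makes the comparison easy to change later:

```python
# The metric used for comparison; note that for 'loss', lower would be better.
metric_to_compare = 'accuracy'
```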
3
Filter runs above threshold
Create a dictionary called filtered_runs using a dictionary comprehension that includes only runs from experiment_runs where the metric_to_compare value is greater than 0.80
Need a hint?

Use a dictionary comprehension with for run, metrics in experiment_runs.items() and filter with if metrics[metric_to_compare] > 0.80.

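Putting the hint together, the comprehension below repeats the data from steps 1 and 2 so it runs on its own, then keeps only runs whose chosen metric exceeds the 0.80 threshold:

```python
experiment_runs = {
    'run1': {'accuracy': 0.82, 'loss': 0.35},
    'run2': {'accuracy': 0.88, 'loss': 0.30},
    'run3': {'accuracy': 0.79, 'loss': 0.40},
}
metric_to_compare = 'accuracy'

# Keep only runs whose chosen metric is above the threshold.
filtered_runs = {
    run: metrics
    for run, metrics in experiment_runs.items()
    if metrics[metric_to_compare] > 0.80
}
```

With this data, `run3` (accuracy 0.79) is filtered out, leaving `run1` and `run2`.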
4
Print the best run
From filtered_runs, print the name and metrics of the run with the highest value for metric_to_compare. Use max() with a key function to find the best run.
Need a hint?

Use max(filtered_runs, key=lambda run: filtered_runs[run][metric_to_compare]) to find the best run, then print it.
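A sketch of the final step, repeating the earlier pieces so it runs on its own. `max()` iterates over the dictionary's keys, and the `key` function tells it to rank each run by its value for the chosen metric:

```python
experiment_runs = {
    'run1': {'accuracy': 0.82, 'loss': 0.35},
    'run2': {'accuracy': 0.88, 'loss': 0.30},
    'run3': {'accuracy': 0.79, 'loss': 0.40},
}
metric_to_compare = 'accuracy'
filtered_runs = {
    run: metrics
    for run, metrics in experiment_runs.items()
    if metrics[metric_to_compare] > 0.80
}

# Rank the surviving runs by the chosen metric and pick the best one.
best_run = max(filtered_runs, key=lambda run: filtered_runs[run][metric_to_compare])
print(best_run, filtered_runs[best_run])
```

With this data the script prints `run2` and its metrics, since 0.88 is the highest accuracy among the filtered runs. Note that `max()` on an empty dictionary raises `ValueError`, so in a real pipeline you would check that `filtered_runs` is non-empty first.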