Practice - 5 Tasks
Answer the questions below
1. Fill in the blank (easy)
MLOps
Complete the code to load the experiment run by its ID.

run = client.get_run([[1]])
💡 Hint
Common mistakes:
- Using experiment_id instead of run_id
- Passing project_name, which is not accepted by get_run
Explanation: The get_run method requires the run ID to fetch the specific experiment run.
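A minimal sketch of the completed answer. The stub client below is a hypothetical stand-in for a real MlflowClient (so the snippet runs without a tracking server), but the call shape matches the client's get_run(run_id):

```python
from types import SimpleNamespace

# Hypothetical stand-in for an MLflow client so this runs without a
# tracking server; a real MlflowClient.get_run(run_id) fetches the run
# record from the tracking backend.
class StubClient:
    def get_run(self, run_id):
        # Return an object shaped like an MLflow Run: run.info and run.data
        return SimpleNamespace(
            info=SimpleNamespace(run_id=run_id),
            data=SimpleNamespace(metrics={"accuracy": 0.91}),
        )

client = StubClient()

# The completed answer: get_run takes the run ID, not an experiment ID.
run = client.get_run("abc123")
print(run.info.run_id)  # → abc123
```

Note the common mistake the hint warns about: passing an experiment ID here would look up the wrong kind of entity.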
2. Fill in the blank (medium)
MLOps
Complete the code to compare two runs by their metrics.

metrics_diff = run1.data.metrics[[1]] - run2.data.metrics[[1]]
💡 Hint
Common mistakes:
- Using run_id, which is not a metric
- Using start_time, which is a timestamp, not a metric
Explanation: Metrics like 'accuracy' are stored in the metrics dictionary and can be compared between runs.
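To see why a metric name such as 'accuracy' is the right fill, here is a sketch with plain dicts standing in for run1.data.metrics and run2.data.metrics (the values are invented for illustration):

```python
# Plain dicts stand in for run1.data.metrics / run2.data.metrics;
# MLflow exposes logged metrics as a name -> value mapping.
run1_metrics = {"accuracy": 0.91, "loss": 0.24}
run2_metrics = {"accuracy": 0.87, "loss": 0.31}

# The completed answer: index both metric dicts by the same metric name.
metrics_diff = run1_metrics["accuracy"] - run2_metrics["accuracy"]
print(round(metrics_diff, 2))  # → 0.04
```

Indexing by run_id or start_time would fail (or compare the wrong thing), because those live on run.info, not in the metrics mapping.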
3. Fill in the blank (hard)
MLOps
Fix the error in the code to fetch the latest run of an experiment.

latest_run = client.search_runs(experiment_ids=[[1]], order_by=['start_time DESC'])[0]
💡 Hint
Common mistakes:
- Passing run_id instead of experiment_id
- Using project_id, which is not accepted here
Explanation: The search_runs method requires experiment_ids to filter runs by experiment.
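A sketch of the corrected call, using a hypothetical stub in place of MlflowClient so it runs standalone; the stub only mimics the two behaviors the exercise relies on, filtering by experiment_ids and sorting newest-first for order_by=['start_time DESC']:

```python
from types import SimpleNamespace

# Hypothetical stub standing in for MlflowClient; a real search_runs call
# queries the tracking server and honors the order_by clause there.
class StubClient:
    def __init__(self, runs):
        self._runs = runs

    def search_runs(self, experiment_ids, order_by=None):
        # Keep only runs from the requested experiments.
        runs = [r for r in self._runs if r.info.experiment_id in experiment_ids]
        if order_by == ["start_time DESC"]:
            # Newest run first, as the order_by clause requests.
            runs.sort(key=lambda r: r.info.start_time, reverse=True)
        return runs

def make_run(run_id, experiment_id, start_time):
    return SimpleNamespace(
        info=SimpleNamespace(run_id=run_id, experiment_id=experiment_id,
                             start_time=start_time)
    )

client = StubClient([
    make_run("r1", "exp1", 100),
    make_run("r2", "exp1", 200),
    make_run("r3", "exp2", 300),
])

# The corrected answer: filter by experiment_ids (a list), newest first.
latest_run = client.search_runs(experiment_ids=["exp1"],
                                order_by=["start_time DESC"])[0]
print(latest_run.info.run_id)  # → r2
```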
4. Fill in the blank (hard)
MLOps
Fill both blanks to create a dictionary of metric differences between two runs.

diffs = {metric: run1.data.metrics[[1]] - run2.data.metrics[[1]] for metric in run1.data.metrics if metric [[2]] run2.data.metrics}
💡 Hint
Common mistakes:
- Using '==' instead of 'in' to check dictionary keys
- Using a wrong variable name instead of 'metric'
Explanation: We use 'metric' as the key to access values, and 'in' to check that the metric exists in both runs.
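The completed comprehension can be checked with hand-made run objects (SimpleNamespace stand-ins for MLflow runs; metric values invented for illustration):

```python
from types import SimpleNamespace

# run.data.metrics modeled as plain dicts; run1 logs an extra metric ('f1')
# that run2 lacks, so the 'in' guard matters.
run1 = SimpleNamespace(data=SimpleNamespace(
    metrics={"accuracy": 0.91, "loss": 0.24, "f1": 0.88}))
run2 = SimpleNamespace(data=SimpleNamespace(
    metrics={"accuracy": 0.87, "loss": 0.31}))

# The completed answer: the loop variable 'metric' as the key, and 'in'
# (not '==') to test membership in run2's metrics dict.
diffs = {
    metric: run1.data.metrics[metric] - run2.data.metrics[metric]
    for metric in run1.data.metrics
    if metric in run2.data.metrics
}
print(sorted(diffs))  # → ['accuracy', 'loss']
```

'f1' is skipped because it exists only in run1; with '==' instead of 'in', the guard would compare a string to a dict and always be False.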
5. Fill in the blank (hard)
MLOps
Fill all three blanks to filter runs on a metric threshold and create a summary dictionary.

summary = {run.data.metrics[[1]]: run.data.metrics[[2]] for run in runs if run.data.metrics[[3]] > 0.8}
💡 Hint
Common mistakes:
- Using the same metric for all blanks without considering the filtering condition
- Using metrics that may not exist in all runs
Explanation: We use 'accuracy' as the key and 'loss' as the value in the summary, filtering runs where 'accuracy' is greater than 0.8.
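A runnable sketch of the filled-in comprehension, again with SimpleNamespace stand-ins for runs and invented metric values:

```python
from types import SimpleNamespace

def make_run(accuracy, loss):
    # Shape mimics an MLflow run: metrics live under run.data.metrics.
    return SimpleNamespace(data=SimpleNamespace(
        metrics={"accuracy": accuracy, "loss": loss}))

runs = [make_run(0.92, 0.20), make_run(0.75, 0.40), make_run(0.85, 0.30)]

# The completed answer: 'accuracy' as the key and the filter metric,
# 'loss' as the value; only runs with accuracy > 0.8 survive.
summary = {
    run.data.metrics["accuracy"]: run.data.metrics["loss"]
    for run in runs
    if run.data.metrics["accuracy"] > 0.8
}
print(summary)  # → {0.92: 0.2, 0.85: 0.3}
```

The middle run (accuracy 0.75) is filtered out; note that keying on a float metric assumes accuracy values are distinct across runs, otherwise later runs overwrite earlier keys.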