MLOps · DevOps · ~10 mins

Comparing experiment runs in MLOps - Step-by-Step Execution

Process Flow - Comparing experiment runs
Start: Select experiments
Fetch run data
Extract metrics and parameters
Align runs for comparison
Visualize differences
Analyze results and decide
This flow shows how experiment runs are selected, their data fetched and aligned, then compared visually to analyze differences.
Execution Sample
MLOps
runs = fetch_runs(['run1', 'run2'])   # fetch raw data for the selected runs
metrics = extract_metrics(runs)       # pull out each run's metrics
comparison = compare(metrics)         # align metrics side by side
visualize_comparison(comparison)      # render the differences
This code fetches two experiment runs, extracts their metrics, compares them, and visualizes the differences.
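The functions above are placeholders rather than a real tracking API. A minimal self-contained sketch of the same pipeline, assuming an in-memory mock run store (all names and values here are illustrative), might look like:

```python
# Mock run store standing in for an experiment tracker (illustrative data only)
RUN_STORE = {
    'run1': {'params': {'lr': 0.01}, 'metrics': {'accuracy': 0.9, 'loss': 0.1}},
    'run2': {'params': {'lr': 0.1},  'metrics': {'accuracy': 0.85, 'loss': 0.15}},
}

def fetch_runs(run_ids):
    # Step 2: fetch raw data (params, metrics) for each selected run
    return {rid: RUN_STORE[rid] for rid in run_ids}

def extract_metrics(runs):
    # Step 3: keep only the metrics dict from each run
    return {rid: data['metrics'] for rid, data in runs.items()}

def compare(metrics):
    # Step 4: align metrics so each name maps to one value per run, in run order
    run_ids = list(metrics)
    names = sorted({name for run in metrics.values() for name in run})
    return {name: [metrics[rid].get(name) for rid in run_ids] for name in names}

runs = fetch_runs(['run1', 'run2'])
metrics = extract_metrics(runs)
aligned = compare(metrics)
print(aligned)  # {'accuracy': [0.9, 0.85], 'loss': [0.1, 0.15]}
```

In a real setup the mock store would be replaced by calls to a tracking server (e.g. MLflow or Weights & Biases), but the select-fetch-extract-align shape stays the same.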
Process Table
Step | Action | Input | Output | Notes
1 | Select experiment runs | ['run1', 'run2'] | Runs selected | User chooses runs to compare
2 | Fetch run data | Runs selected | Raw data for run1 and run2 | Data includes params, metrics, tags
3 | Extract metrics | Raw data | Metrics dict for each run | Focus on key performance metrics
4 | Align runs | Metrics dicts | Aligned metrics table | Ensures metrics correspond across runs
5 | Visualize comparison | Aligned metrics | Comparison chart/table | Shows differences clearly
6 | Analyze results | Comparison chart | Insights on performance | User decides best run
7 | Exit | N/A | Comparison complete | Process ends
💡 All selected runs compared and visualized for analysis
Status Tracker
Variable | Start | After Step 2 | After Step 3 | After Step 4 | Final
runs | [] | ['run1', 'run2'] | ['run1', 'run2'] | ['run1', 'run2'] | ['run1', 'run2']
raw_data | {} | {'run1': {...}, 'run2': {...}} | {'run1': {...}, 'run2': {...}} | {'run1': {...}, 'run2': {...}} | {'run1': {...}, 'run2': {...}}
metrics | {} | {} | {'run1': {'accuracy': 0.9, 'loss': 0.1}, 'run2': {'accuracy': 0.85, 'loss': 0.15}} | {'accuracy': [0.9, 0.85], 'loss': [0.1, 0.15]} | {'accuracy': [0.9, 0.85], 'loss': [0.1, 0.15]}
comparison_chart | null | null | null | null | Rendered chart/table
Key Moments - 3 Insights
Why do we need to align metrics before comparing runs?
Because different runs might log different sets of metrics, or log them in a different order. Aligning ensures we compare the same metrics side by side, as shown in step 4 of the Process Table.
What happens if a metric is missing in one run?
The alignment step handles missing metrics by marking them as absent or null, so the comparison chart can show gaps or differences clearly, preventing misleading results.
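That missing-metric behavior can be sketched with a small hypothetical helper (not part of the original sample) that fills gaps with None so the comparison stays honest:

```python
def align_metrics(metrics_by_run):
    """Align metric values across runs; metrics absent from a run become None."""
    run_ids = list(metrics_by_run)
    # Union of all metric names logged by any run, in a stable sorted order
    names = sorted({name for run in metrics_by_run.values() for name in run})
    # .get() returns None for runs that never logged a given metric
    return {name: [metrics_by_run[rid].get(name) for rid in run_ids] for name in names}

# 'f1' is only logged for run2, so run1 gets an explicit None placeholder
aligned = align_metrics({
    'run1': {'accuracy': 0.9},
    'run2': {'accuracy': 0.85, 'f1': 0.8},
})
print(aligned)  # {'accuracy': [0.9, 0.85], 'f1': [None, 0.8]}
```

Keeping the None placeholder, rather than silently dropping the metric, is what lets the chart show a gap instead of an apples-to-oranges comparison.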
Why is visualization important after comparison?
Visualization helps quickly spot differences and trends between runs, making it easier to analyze results and decide which run performed better, as seen in step 5.
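Even without a plotting library, a plain-text table already makes the differences from step 5 visible. A minimal sketch, assuming the aligned-metrics dict produced in step 4 (function name is illustrative):

```python
def visualize_comparison(aligned, run_ids):
    # One header row, then one row per metric with each run's value side by side
    header = 'metric'.ljust(10) + ''.join(rid.ljust(8) for rid in run_ids)
    rows = [header]
    for name, values in aligned.items():
        rows.append(name.ljust(10) + ''.join(str(v).ljust(8) for v in values))
    return '\n'.join(rows)

print(visualize_comparison(
    {'accuracy': [0.9, 0.85], 'loss': [0.1, 0.15]},
    ['run1', 'run2'],
))
```

In practice this would be a chart (e.g. grouped bars per metric), but the side-by-side layout is the essential property either way.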
Visual Quiz - 3 Questions
Test your understanding
Looking at the Process Table, at which step are the metrics aligned for comparison?
AStep 5
BStep 3
CStep 4
DStep 2
💡 Hint
Check the 'Action' column of the Process Table row containing 'Align runs'
According to the Status Tracker, what is the value of 'metrics' after Step 3?
AEmpty dictionary {}
B{'run1': {'accuracy': 0.9, 'loss': 0.1}, 'run2': {'accuracy': 0.85, 'loss': 0.15}}
CRaw data for runs
DRendered chart/table
💡 Hint
Look at the 'metrics' row under the 'After Step 3' column in the Status Tracker
If a new run 'run3' is added, which step would need to be repeated to include it in the comparison?
AAll steps from 1 to 5
BSteps 1 and 2
CStep 1 only
DSteps 3 and 4
💡 Hint
Adding a run affects selection, fetching data, extracting metrics, aligning, and visualization
Concept Snapshot
Comparing experiment runs:
1. Select runs to compare
2. Fetch their data (params, metrics)
3. Extract key metrics
4. Align metrics across runs
5. Visualize differences
6. Analyze to choose best run
Full Transcript
This visual execution shows how to compare experiment runs step by step. First, runs are selected. Then their raw data is fetched, including parameters and metrics. Next, key metrics are extracted from each run. These metrics are aligned so that the same metrics from different runs sit side by side. After alignment, a visualization such as a chart or table is created to show the differences clearly. Finally, the user analyzes the visualization to decide which run performed better. Variables like 'runs', 'raw_data', and 'metrics' change as the process moves forward. Key moments include understanding why alignment is needed, how missing metrics are handled, and why visualization matters. The quiz tests understanding of the steps and variable states.