MLOps · DevOps · ~10 mins

Weights and Biases overview in MLOps - Step-by-Step Execution

Process Flow - Weights and Biases overview
Start ML Project → Initialize W&B → Log Configurations → Train Model → Log Metrics & Artifacts → Visualize Results on W&B Dashboard → Iterate or Share Results → End
This flow shows how a machine learning project uses Weights and Biases to track experiments step-by-step.
Execution Sample
import wandb

# Start a new run in the "my-ml-project" project.
wandb.init(project="my-ml-project")

# Store hyperparameters on the run's config so they appear in the dashboard.
config = wandb.config
config.learning_rate = 0.01

# Log a (simulated) loss value once per epoch.
for epoch in range(3):
    loss = 0.1 / (epoch + 1)
    wandb.log({"epoch": epoch, "loss": loss})
This code initializes W&B, sets a learning rate, and logs loss values for 3 epochs.
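The loss values that the example logs can be checked with a quick stand-alone sketch. No W&B account is needed here: a plain list stands in for wandb.log, which is an illustrative simplification, not the real library behavior.

```python
# Stand-in for wandb.log: collect each epoch's metrics in a list.
logged = []

for epoch in range(3):
    loss = 0.1 / (epoch + 1)  # same loss schedule as the example above
    logged.append({"epoch": epoch, "loss": round(loss, 4)})

print(logged)
# The three entries match steps 3-5 of the Process Table: 0.1, 0.05, 0.0333
```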
Process Table
Step | Action | Variable State | W&B Log Entry | Output/Result
1 | Import wandb and initialize project | wandb initialized, project='my-ml-project' | None | W&B run started
2 | Set config.learning_rate | config.learning_rate=0.01 | None | Config saved
3 | Epoch 0: calculate loss | epoch=0, loss=0.1 | {'epoch': 0, 'loss': 0.1} | Loss logged
4 | Epoch 1: calculate loss | epoch=1, loss=0.05 | {'epoch': 1, 'loss': 0.05} | Loss logged
5 | Epoch 2: calculate loss | epoch=2, loss=0.0333 | {'epoch': 2, 'loss': 0.0333} | Loss logged
6 | End of loop | epoch=2 (range(3) exhausted) | None | Training complete, data on W&B
💡 The loop ends after 3 epochs, with all losses logged to W&B.
Status Tracker
Variable | Start | After Step 2 | After Step 3 | After Step 4 | After Step 5 | Final
epoch | undefined | undefined | 0 | 1 | 2 | 2 (loop ends)
loss | undefined | undefined | 0.1 | 0.05 | 0.0333 | 0.0333
config.learning_rate | undefined | 0.01 | 0.01 | 0.01 | 0.01 | 0.01
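The Status Tracker can be reproduced with a small sketch that snapshots the variables at each tracked moment. The `snapshots` list and dict layout here are illustrative, not part of the wandb API.

```python
snapshots = []  # one dict per tracked moment

# Step 2: hyperparameters are set once, before training.
config = {"learning_rate": 0.01}
snapshots.append(dict(config))

epoch = loss = None  # "undefined" before the loop starts
for epoch in range(3):
    loss = 0.1 / (epoch + 1)
    snapshots.append({"epoch": epoch, "loss": round(loss, 4), **config})

# After the loop, epoch keeps its last value (2), loss keeps 0.0333,
# and config["learning_rate"] was never modified.
print(snapshots)
```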
Key Moments - 3 Insights
Why do we call wandb.log inside the loop?
Each epoch produces a new metric value (loss) to track; calling wandb.log inside the loop sends the updated data to W&B after every epoch, as shown in steps 3-5.
What happens if we don't call wandb.init before logging?
Without wandb.init (step 1), W&B doesn't know which project or run to associate logs with, so logging metrics (steps 3-5) will fail or be ignored.
Is config.learning_rate updated during training?
No, config.learning_rate is set once before training (step 2) and remains constant during the loop, as seen in the Status Tracker.
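The init-before-log dependency can be illustrated with a toy tracker. This MiniTracker class is a deliberately simplified stand-in, not the real wandb internals, but it shows why logging without an initialized run has nowhere to send data.

```python
class MiniTracker:
    """Toy stand-in illustrating why init() must come before log()."""

    def __init__(self):
        self.run = None  # no active run yet

    def init(self, project):
        # Create a run that logs can be attached to.
        self.run = {"project": project, "history": []}

    def log(self, metrics):
        if self.run is None:
            raise RuntimeError("call init() before log()")
        self.run["history"].append(metrics)


trk = MiniTracker()
try:
    trk.log({"loss": 0.1})  # logging before init fails
except RuntimeError as e:
    print(e)  # prints: call init() before log()

trk.init("my-ml-project")
trk.log({"loss": 0.1})  # now the metric is recorded on the run
```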
Visual Quiz - 3 Questions
Test your understanding
Looking at the Process Table, what is the loss value logged at step 4?
A. 0.1
B. 0.0333
C. 0.05
D. 0.0
💡 Hint
Check the 'W&B Log Entry' column at step 4 in the Process Table.
At which step does the training loop end according to the Process Table?
A. Step 6
B. Step 3
C. Step 5
D. Step 2
💡 Hint
Look for the row mentioning 'End of loop' in the Process Table.
If we change config.learning_rate to 0.02 at step 2, what changes in the Status Tracker?
A. config.learning_rate remains 0.01
B. config.learning_rate changes to 0.02 from step 2 onward
C. epoch values change
D. loss values become zero
💡 Hint
Refer to the 'config.learning_rate' row in the Status Tracker and how it updates after step 2.
Concept Snapshot
Weights and Biases (W&B) helps track ML experiments.
Initialize with wandb.init(project).
Set config parameters before training.
Log metrics each epoch with wandb.log({"metric": value}).
View results on W&B dashboard to compare runs.
Full Transcript
Weights and Biases is a tool to track machine learning experiments. First, you start a project with wandb.init. Then you set configuration parameters like learning rate. During training, you log metrics such as loss after each epoch using wandb.log. This data is sent to the W&B dashboard where you can see graphs and compare runs. The example code shows initializing W&B, setting learning rate, and logging loss for 3 epochs. The execution table traces each step, showing how variables change and when logs happen. Key points include calling wandb.log inside the loop to track progress, initializing W&B before logging, and that config values stay constant unless changed explicitly.