
Gradient access (.grad) in PyTorch - Model Metrics & Evaluation

Which metric matters for Gradient Access (.grad) and WHY

When working with gradients in PyTorch, the key quantity to monitor is the gradient values themselves, accessed through a parameter's .grad attribute. These values give the direction and magnitude by which each parameter should change to reduce the loss, so inspecting them tells us whether the model is learning properly. If gradients are zero or vanishingly small, the model learns slowly or not at all; if they are very large, updates can become unstable.

Confusion matrix or equivalent visualization

For gradient access, we don't use a confusion matrix. Instead, we look at the gradient tensor values. For example, after a backward pass, a parameter's gradient might look like this:

    tensor([[ 0.01, -0.02],
            [ 0.00,  0.03]])
    

This shows how much each parameter will update. Monitoring these values helps detect issues like vanishing or exploding gradients.
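A minimal sketch of how such a gradient tensor is produced and read back, using a made-up 2x2 weight and input (the values are illustrative, not from the example above):

```python
import torch

# Hypothetical 2x2 parameter and input vector
w = torch.tensor([[1.0, 2.0], [3.0, 4.0]], requires_grad=True)
x = torch.tensor([[1.0], [1.0]])

loss = (w @ x).sum()  # scalar loss, so backward() needs no argument
loss.backward()       # populates w.grad

print(w.grad)         # tensor([[1., 1.], [1., 1.]])
```

Because the loss here is a plain sum and every input entry is 1.0, each element of w.grad comes out as 1.0; with a real loss the values would vary, and that variation is exactly what you monitor.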

Precision vs Recall tradeoff analogy for gradients

Think of gradients like directions on a map. If the directions are too weak (small gradients), you might not move far enough to reach your goal (slow learning). If the directions are too strong (large gradients), you might overshoot or get lost (unstable learning). The tradeoff is to keep gradients in a healthy range: strong enough to drive learning, but not so strong that they cause problems.

What "good" vs "bad" gradient values look like
  • Good: Gradients have moderate values, not zero, not extremely large. They change smoothly during training.
  • Bad: Gradients are all zeros (no learning), or very large values (causing unstable updates), or NaN values (training breaks).
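The "good vs bad" checks above can be automated with a small helper that summarizes gradient health. This is a sketch, not a standard PyTorch utility; gradient_health is a hypothetical name:

```python
import torch

def gradient_health(model):
    """Return (global gradient norm, any-NaN flag) over a model's parameters."""
    norms = [p.grad.norm() for p in model.parameters() if p.grad is not None]
    total = torch.stack(norms).norm() if norms else torch.tensor(0.0)
    has_nan = any(torch.isnan(n).item() for n in norms)
    return total.item(), has_nan

model = torch.nn.Linear(4, 2)
model(torch.randn(8, 4)).sum().backward()

norm, has_nan = gradient_health(model)
print(f"grad norm: {norm:.4f}, NaN: {has_nan}")
```

A norm near zero suggests vanishing gradients, a very large norm suggests exploding gradients, and a NaN flag means training has already broken.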
Common pitfalls when accessing gradients
  • Forgetting to call optimizer.zero_grad() before backward() causes gradients to accumulate unexpectedly.
  • Accessing .grad before backward() returns None because gradients are not computed yet.
  • Holding references to tensors still attached to the computation graph (instead of calling .detach()) can keep the graph alive and grow memory usage.
  • Ignoring gradient clipping can lead to exploding gradients and unstable training.
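The first two pitfalls are easy to demonstrate with a single scalar parameter (a toy sketch, not a training loop):

```python
import torch

w = torch.tensor(1.0, requires_grad=True)
print(w.grad)     # None -- .grad does not exist before any backward() call

(w * 2).backward()
print(w.grad)     # tensor(2.)

# Without zeroing, a second backward() ADDS to the existing gradient.
(w * 2).backward()
print(w.grad)     # tensor(4.) -- accumulated, not replaced

w.grad.zero_()    # reset; optimizer.zero_grad() does this for all parameters
(w * 2).backward()
print(w.grad)     # tensor(2.)
```

This accumulation is deliberate PyTorch behavior (it enables tricks like gradient accumulation over mini-batches), which is why zeroing must be explicit.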
Self-check question

Your model's parameters have gradients that are all zeros after backward(). Is your model learning? Why or why not?

Answer: No, the model is not learning because zero gradients mean no updates will happen to the parameters. This could be due to a bug, such as missing loss calculation, or the model output not depending on the parameters.
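The "output not depending on the parameters" failure mode can be seen directly. Note that a parameter that never enters the graph gets .grad of None rather than zeros (the variable names here are illustrative):

```python
import torch

used = torch.tensor(1.0, requires_grad=True)
unused = torch.tensor(1.0, requires_grad=True)

loss = used * 3    # loss depends only on `used`
loss.backward()

print(used.grad)   # tensor(3.)
print(unused.grad) # None -- never part of the graph, so no gradient at all
```

So when debugging, distinguish .grad that is None (the parameter is disconnected from the loss) from .grad that is a tensor of zeros (the parameter is connected, but the computed gradient happens to vanish).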

Key Result
Gradient values accessed via .grad indicate if model parameters are updating correctly; zero or extreme values signal learning issues.