A dynamic computation graph lets a model change its computation while it runs. This helps when inputs or tasks vary from one run to the next.
Dynamic computation graph advantage in PyTorch
import torch

# Define a simple dynamic graph example
def dynamic_graph_example(x):
    if x.sum() > 0:
        y = x * 2
    else:
        y = x - 2
    return y

input_tensor = torch.tensor([1.0, -1.0, 2.0], requires_grad=True)
output = dynamic_graph_example(input_tensor)
output.backward(torch.ones_like(output))
print(output)
print(input_tensor.grad)
Dynamic graphs are created on the fly during the forward pass.
PyTorch builds the graph as you run operations, so you can use normal Python control flow.
import torch

def dynamic_graph_example(x):
    if x.sum() > 0:
        y = x * 2
    else:
        y = x - 2
    return y

input_tensor = torch.tensor([1.0, -1.0, 2.0], requires_grad=True)
output = dynamic_graph_example(input_tensor)
print(output)
import torch

def dynamic_graph_example(x):
    if x.sum() > 10:
        y = x * 3
    else:
        y = x / 2
    return y

input_tensor = torch.tensor([2.0, 3.0, 4.0], requires_grad=True)
output = dynamic_graph_example(input_tensor)
print(output)
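One way to see that the graph really is built on the fly is to inspect `grad_fn` after the forward pass: the recorded operation differs depending on which branch ran. A small sketch, reusing the same `dynamic_graph_example` pattern as above:

```python
import torch

def dynamic_graph_example(x):
    # Normal Python control flow: the branch taken decides
    # which operation gets recorded in the autograd graph.
    if x.sum() > 0:
        y = x * 2
    else:
        y = x - 2
    return y

a = torch.tensor([1.0, 2.0], requires_grad=True)    # sum > 0 -> multiply branch
b = torch.tensor([-1.0, -2.0], requires_grad=True)  # sum <= 0 -> subtract branch

# grad_fn names the last recorded op, revealing which branch built the graph
print(dynamic_graph_example(a).grad_fn)  # a MulBackward node
print(dynamic_graph_example(b).grad_fn)  # a SubBackward node
```

Calling the function with different inputs yields differently shaped graphs, even though the Python source is the same.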
This program shows how the dynamic graph differs for two inputs, one with a positive sum and one with a negative sum. It prints the outputs and gradients so the effect of each branch is visible.
import torch

def dynamic_graph_example(x):
    if x.sum() > 0:
        y = x * 2
    else:
        y = x - 2
    return y

# Create input tensors
input_positive = torch.tensor([1.0, -1.0, 2.0], requires_grad=True)
input_negative = torch.tensor([-3.0, -2.0, -1.0], requires_grad=True)

print('Before backward:')
print('input_positive:', input_positive)
print('input_negative:', input_negative)

# Forward pass
output_positive = dynamic_graph_example(input_positive)
output_negative = dynamic_graph_example(input_negative)

print('\nOutputs:')
print('output_positive:', output_positive)
print('output_negative:', output_negative)

# Backward pass
output_positive.sum().backward()
output_negative.sum().backward()

print('\nGradients after backward:')
print('input_positive.grad:', input_positive.grad)
print('input_negative.grad:', input_negative.grad)
Dynamic graphs let you use normal Python code like if-else and loops inside model definitions.
They make debugging easier because errors show up immediately during the forward pass.
Dynamic graphs may be slower than static graphs for some tasks but are more flexible.
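Loops work the same way as if-else. A minimal sketch (the repeated-doubling rule and the helper name `double_until` are made up for illustration): the loop runs a data-dependent number of times, and autograd still tracks every iteration.

```python
import torch

def double_until(x, limit=10.0):
    # The number of iterations depends on the data, so the autograd
    # graph has a different depth for different inputs.
    while x.sum() < limit:
        x = x * 2
    return x

x = torch.tensor([1.0, 2.0], requires_grad=True)
y = double_until(x)   # sum goes 3 -> 6 -> 12, so two doublings
y.sum().backward()
print(y)              # tensor([4., 8.], ...)
print(x.grad)         # tensor([4., 4.]) — each doubling multiplies the gradient by 2
```

A static graph would need a special looping construct to express this; here it is just a Python `while` loop.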
Dynamic computation graphs build the model step-by-step as data flows through.
This allows models to change behavior based on input or conditions.
PyTorch uses dynamic graphs, making it easy to write flexible and debuggable models.
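Because the forward pass is ordinary Python executing, ordinary Python debugging works in the middle of it. A small sketch (the function name `forward_with_debug` and the NaN check are illustrative assumptions, not part of the examples above):

```python
import torch

def forward_with_debug(x):
    h = x * 3
    # Ordinary debugging works mid-forward: print values or set a
    # pdb breakpoint here — no special graph tooling needed.
    print('intermediate h:', h)
    if torch.isnan(h).any():
        # Fails immediately with a normal Python traceback
        raise ValueError('NaN appeared in h')
    return h.sum()

x = torch.tensor([1.0, 2.0], requires_grad=True)
loss = forward_with_debug(x)
loss.backward()
print('grad:', x.grad)  # each element contributes a factor of 3 to the sum
```

In a static-graph framework, an error like this would typically surface later, when the compiled graph runs, rather than at the line that caused it.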