PyTorch · ML · ~20 mins

Dynamic computation graph advantage in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · intermediate
Why use dynamic computation graphs in PyTorch?

Which of the following best explains the main advantage of dynamic computation graphs in PyTorch compared to static graphs?

A. They allow changing the model structure at runtime, enabling flexible architectures such as variable-length inputs.
B. They automatically optimize the graph for faster execution without user intervention.
C. They require less memory because the graph is precompiled once and reused for all inputs.
D. They prevent any runtime errors by checking all operations before training starts.
💡 Hint

Think about how dynamic graphs let you build the model step-by-step as data flows through.
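To make the hint concrete, here is a minimal sketch (not part of the quiz) of a model whose forward pass uses ordinary Python control flow. The loop count is invented for illustration; the point is that the autograd graph is rebuilt to match each input, which is exactly what a static, precompiled graph cannot do.

```python
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        # Data-dependent loop: the number of layer applications is decided
        # at runtime, so the computation graph differs per input.
        for _ in range(int(x.abs().sum().item()) % 3 + 1):
            x = torch.relu(self.linear(x))
        return x

net = DynamicNet()
out = net(torch.randn(4))
print(out.shape, out.requires_grad)  # autograd still tracks the varying graph
```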

Predict Output · intermediate
Output of dynamic graph example in PyTorch

What will be the output of the following PyTorch code snippet?

import torch
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = x * 2 if x.sum() > 5 else x + 2
print(y)
A. tensor([1., 2., 3.], requires_grad=True)
B. tensor([3., 4., 5.], grad_fn=<AddBackward0>)
C. RuntimeError due to conditional operation on tensor
D. tensor([2., 4., 6.], grad_fn=<MulBackward0>)
💡 Hint

Check the sum of x and which branch of the if-else runs.
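A side note on the mechanics the hint relies on (illustrative values, different from the question's): comparing a tensor sum to a number yields a 0-dimensional boolean tensor, which Python can truth-test directly in an `if`/`else` expression.

```python
import torch

t = torch.tensor([1.0, 1.0, 1.0])
cond = t.sum() > 5          # 0-dim bool tensor; here the sum is 3.0
branch = "multiply" if cond else "add"   # Python truth-tests the scalar tensor
print(branch)
```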

Model Choice · advanced
Choosing model type for variable-length input sequences

You want to build a model that processes sentences of different lengths without padding. Which model type benefits most from dynamic computation graphs?

A. Feedforward Neural Network with fixed input dimension
B. Convolutional Neural Network (CNN) with fixed input size
C. Recurrent Neural Network (RNN) implemented in PyTorch
D. Support Vector Machine (SVM) with fixed feature vectors
💡 Hint

Think about models that naturally handle sequences of varying lengths.
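As a sketch of what "varying lengths without padding" looks like in practice (sizes here are arbitrary), an `nn.RNNCell` can be stepped manually over each sequence; the loop length, and hence the graph, is decided per input at runtime:

```python
import torch
import torch.nn as nn

cell = nn.RNNCell(input_size=8, hidden_size=16)
for seq_len in (3, 7, 5):                 # three "sentences" of different lengths
    h = torch.zeros(1, 16)                # fresh hidden state per sequence
    seq = torch.randn(seq_len, 1, 8)
    for step in seq:                      # the graph grows with the sequence
        h = cell(step, h)
    print(seq_len, h.shape)               # no padding needed
```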

Hyperparameter · advanced
Effect of dynamic graph on training speed

How does using a dynamic computation graph in PyTorch typically affect training speed compared to static graphs?

A. Training is always faster because dynamic graphs optimize execution automatically.
B. Training is usually slower because the graph is rebuilt every iteration.
C. Training speed is unaffected because graphs are cached after first build.
D. Training speed depends only on hardware, not graph type.
💡 Hint

Consider the overhead of building the graph each time.
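One way to see that the graph really is rebuilt per forward pass (a small sketch with made-up values): with data-dependent control flow, the recorded operation, and therefore the `grad_fn`, can differ from one iteration to the next.

```python
import torch

def forward(x):
    # Different branch -> different op -> a different graph is recorded.
    return x * 2 if x.sum() > 5 else x + 2

x_small = torch.tensor([1.0], requires_grad=True)
x_large = torch.tensor([10.0], requires_grad=True)

small_fn = type(forward(x_small).grad_fn).__name__   # addition branch
large_fn = type(forward(x_large).grad_fn).__name__   # multiplication branch
print(small_fn, large_fn)
```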

🔧 Debug · expert
Identifying error in dynamic graph usage

What error will this PyTorch code raise when run?

import torch
x = torch.tensor([1.0, 2.0], requires_grad=True)
with torch.no_grad():
    y = x * 2
z = y.sum()
z.backward()
A. RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
B. No error, backward runs successfully
C. TypeError: unsupported operand type(s) for *: 'Tensor' and 'int'
D. ValueError: expected scalar for backward()
💡 Hint

Think about what torch.no_grad() does to the computation graph.
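For background on the hint (a sketch that stops short of the failing `backward()` call): operations executed under `torch.no_grad()` are not recorded in the graph, so their results come out detached, with no `grad_fn` attached.

```python
import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)
with torch.no_grad():
    y = x * 2        # computed, but not recorded by autograd

print(y.requires_grad, y.grad_fn)  # the result is detached from the graph
```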