
Broadcasting in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
Predict Output (intermediate)
What is the output shape after broadcasting?
Given two tensors a and b in PyTorch, what is the shape of the result after adding them?
PyTorch
import torch

a = torch.randn(3, 1, 5)
b = torch.randn(1, 4, 1)
result = a + b
output_shape = result.shape
print(output_shape)
A. torch.Size([3, 4, 5])
B. torch.Size([3, 4, 1])
C. torch.Size([1, 4, 1])
D. torch.Size([3, 1, 5])
💡 Hint
Remember PyTorch broadcasts dimensions by expanding size 1 dimensions to match the other tensor.
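The hint can be made concrete in plain Python. The sketch below (a hypothetical helper, not a PyTorch API) implements the rule PyTorch applies: align shapes from the right, treat missing leading dimensions as 1, and take the larger size wherever one of the pair is 1.

```python
def broadcast_result_shape(shape_a, shape_b):
    """Hypothetical helper mirroring PyTorch's broadcasting rule:
    right-align the shapes, pad the shorter with leading 1s, and
    take the larger size where one side of a pair is 1."""
    ndim = max(len(shape_a), len(shape_b))
    a = (1,) * (ndim - len(shape_a)) + tuple(shape_a)
    b = (1,) * (ndim - len(shape_b)) + tuple(shape_b)
    out = []
    for da, db in zip(a, b):
        if da == db or da == 1 or db == 1:
            out.append(max(da, db))
        else:
            raise ValueError(f"incompatible sizes {da} and {db}")
    return tuple(out)

print(broadcast_result_shape((2, 1, 3), (1, 5, 3)))  # (2, 5, 3)
```

Applying the same rule by hand to the shapes in the problem tells you the answer.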
Model Choice (intermediate)
Which operation uses broadcasting correctly?
Which PyTorch operation below correctly uses broadcasting to multiply a tensor of shape (2,3) with a tensor of shape (3,)?
A.
x = torch.randn(2, 3)
y = torch.randn(2)
result = x * y
B.
x = torch.randn(2, 3)
y = torch.randn(3)
result = x * y
C.
x = torch.randn(2, 3)
y = torch.randn(1, 3)
result = x + y
D.
x = torch.randn(2, 3)
y = torch.randn(3, 2)
result = x * y
💡 Hint
Broadcasting works when trailing dimensions match or are 1.
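The trailing-dimension check in the hint can be sketched as a small predicate (illustrative only, not a PyTorch function): compare dimensions right to left, and accept each pair that is equal or contains a 1.

```python
def shapes_broadcastable(shape_a, shape_b):
    """Illustrative check (not a PyTorch API): two shapes broadcast
    when, scanning trailing dimensions right to left, every pair is
    equal or one of them is 1; a missing dimension counts as 1."""
    for da, db in zip(reversed(shape_a), reversed(shape_b)):
        if da != db and da != 1 and db != 1:
            return False
    return True

print(shapes_broadcastable((2, 3), (3,)))  # True: trailing 3 matches
print(shapes_broadcastable((2, 3), (2,)))  # False: trailing 3 vs 2
```

Running the candidate shape pairs from the options through a check like this shows which multiplications are legal.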
Hyperparameter (advanced)
How does broadcasting affect batch size in model input?
You have a model expecting input of shape (batch_size, features), but you provide an input tensor of shape (features,) without the batch dimension. What happens?
A. Broadcasting duplicates features along the batch dimension, increasing memory usage exponentially.
B. The input is broadcast to shape (batch_size, features) automatically during the model's forward pass.
C. You must manually add the batch dimension; broadcasting does not add it.
D. The model raises an error because broadcasting cannot add missing dimensions.
💡 Hint
Elementwise broadcasting can expand size-1 dimensions during tensor math, but a model's forward pass will not broadcast a batch dimension onto your input; add it explicitly, e.g. with unsqueeze(0).
🔧 Debug (advanced)
Why does this broadcasting operation raise an error?
Consider the code below. Why does it raise a runtime error?
PyTorch
import torch
x = torch.randn(4, 3)
y = torch.randn(2, 3)
z = x + y
A. Shapes (4, 3) and (2, 3) are incompatible for broadcasting because the first dimensions differ and neither is 1.
B. The tensors must have the same number of elements, which they do not.
C. PyTorch does not support broadcasting for addition operations.
D. The tensors have different data types, causing the addition to fail.
💡 Hint
Check the rules for broadcasting dimensions from left to right.
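The left-to-right scan the hint describes can be turned into a small diagnostic (a hypothetical helper, not anything PyTorch provides) that reports the first dimension pair blocking the broadcast:

```python
def explain_broadcast_failure(shape_a, shape_b):
    """Illustrative diagnostic (not a PyTorch API): report the first
    dimension pair, scanning right to left, that blocks broadcasting."""
    for i, (da, db) in enumerate(zip(reversed(shape_a), reversed(shape_b))):
        if da != db and 1 not in (da, db):
            return f"dim -{i + 1}: sizes {da} and {db} differ and neither is 1"
    return "shapes are broadcastable"

print(explain_broadcast_failure((4, 3), (2, 3)))
# dim -2: sizes 4 and 2 differ and neither is 1
```

For the shapes in the problem, the trailing 3s match, so the scan pinpoints the leading dimensions as the culprit.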
🧠 Conceptual (expert)
What is the effect of broadcasting on memory usage?
When PyTorch broadcasts a tensor during an operation, what happens to the underlying memory usage?
A. Broadcasting duplicates the tensor data on the GPU to speed up computation, increasing memory usage.
B. Broadcasting compresses the tensor to save memory during operations.
C. Broadcasting creates a new large tensor by copying data to match the broadcasted shape, increasing memory usage.
D. Broadcasting creates a view that virtually expands the tensor without copying data, so memory usage stays low.
💡 Hint
Think about how broadcasting avoids unnecessary data duplication.
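The no-copy trick can be mimicked in plain Python. PyTorch's expand() returns a view whose expanded dimension has stride 0, so every index along it reads the same stored data; the toy class below (a sketch, not PyTorch internals) shows the idea for a single row "expanded" to many rows.

```python
class BroadcastView:
    """Toy sketch of a stride-0 view: any row index maps back to the
    single stored row, so 'expanding' dim 0 copies no data. PyTorch's
    expand() applies the same trick (stride 0) to real tensors."""

    def __init__(self, row, repeat):
        self.row = row        # the only data actually stored
        self.repeat = repeat  # virtual size of dim 0

    def __getitem__(self, idx):
        i, j = idx
        if not (0 <= i < self.repeat):
            raise IndexError(i)
        # Stride 0 along dim 0: every row index i reads the same row.
        return self.row[j]

view = BroadcastView([10, 20, 30], repeat=1000)
print(view[999, 2])  # 30, yet only one 3-element row is stored
```

A thousand virtual rows, one stored row: that is why broadcasting keeps memory usage low.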