PyTorch · ~20 mins

First PyTorch computation - Practice Problems & Coding Challenges

Challenge - 5 Problems
Predict Output
intermediate
What is the output of this PyTorch tensor operation?
Consider the following PyTorch code that creates two tensors and adds them. What is the output tensor?
PyTorch
import torch
x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5, 6])
z = x + y
print(z)
A. tensor([5, 7, 9])
B. tensor([4, 7, 9])
C. tensor([1, 2, 3, 4, 5, 6])
D. tensor([3, 3, 3])
💡 Hint
Remember that adding two tensors of the same shape adds their elements one by one.
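To check the hint yourself, a minimal sketch of element-wise addition:

```python
import torch

# Two 1-D tensors of the same shape: addition pairs up elements.
x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5, 6])
z = x + y  # [1+4, 2+5, 3+6]
print(z)   # tensor([5, 7, 9])
```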
Model Choice
intermediate
Choosing the right PyTorch tensor for GPU computation
You want to perform fast matrix multiplication on a GPU using PyTorch. Which tensor creation method ensures the tensor is on the GPU?
A. torch.tensor([[1, 2], [3, 4]], dtype=torch.float64)
B. torch.tensor([[1, 2], [3, 4]])
C. torch.tensor([[1, 2], [3, 4]], device='cuda')
D. torch.tensor([[1, 2], [3, 4]], requires_grad=True)
💡 Hint
To use GPU, the tensor must be created or moved to the CUDA device.
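A sketch of device placement that also runs on machines without a GPU (it falls back to CPU when CUDA is unavailable):

```python
import torch

# Pick CUDA if present; otherwise fall back to CPU so the sketch runs anywhere.
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.tensor([[1., 2.], [3., 4.]], device=device)
b = torch.tensor([[5., 6.], [7., 8.]], device=device)
c = a @ b  # matrix multiplication runs on whichever device holds the tensors
print(c.device)
```

Tensors created on CPU can also be moved later with `.to('cuda')`; what matters is that the operands live on the GPU before the multiplication.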
Metrics
advanced
Interpreting training loss and accuracy in PyTorch
During training a classification model in PyTorch, you observe the following after one epoch: training loss = 0.8, training accuracy = 0.65. What does this mean?
A. The model is 80% accurate and 65% loss means it is underfitting.
B. The model is making correct predictions 65% of the time, and the loss value indicates how far predictions are from true labels.
C. Loss and accuracy are unrelated; only accuracy matters for model quality.
D. A loss of 0.8 means the model predictions are perfect, and accuracy of 0.65 is low.
💡 Hint
Loss measures prediction error; accuracy measures correct predictions percentage.
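To see the two metrics side by side, here is a sketch on a made-up batch (the logits and labels are hypothetical, chosen only for illustration):

```python
import torch
import torch.nn.functional as F

# Hypothetical batch: 4 samples, 3 classes.
logits = torch.tensor([[2.0, 0.5, 0.1],
                       [0.2, 1.5, 0.3],
                       [0.1, 0.2, 2.2],
                       [1.0, 0.9, 0.8]])
labels = torch.tensor([0, 1, 2, 1])

loss = F.cross_entropy(logits, labels)       # average distance from the true labels
preds = logits.argmax(dim=1)                 # predicted class per sample
accuracy = (preds == labels).float().mean()  # fraction predicted correctly (0.75 here)

print(f"loss={loss.item():.2f}, accuracy={accuracy.item():.2f}")
```

Loss is a continuous measure of how far the predicted probabilities are from the true labels; accuracy only counts whether the top prediction was right, which is why the two numbers move together but are not the same thing.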
🔧 Debug
advanced
Why does this PyTorch code raise an error?
What error does the following PyTorch code raise, and why?
PyTorch
import torch
x = torch.tensor([1, 2, 3])
y = torch.tensor([4, 5])
z = x + y
A. RuntimeError: The tensors have different shapes and cannot be broadcast for addition.
B. SyntaxError: Missing parentheses in print statement.
C. TypeError: Unsupported operand types for +: 'Tensor' and 'Tensor'.
D. No error; output is tensor([5, 7, 3]).
💡 Hint
Check if the two tensors have the same shape or compatible shapes for addition.
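A sketch that reproduces the failure and catches it, so you can inspect the message:

```python
import torch

x = torch.tensor([1, 2, 3])  # shape (3,)
y = torch.tensor([4, 5])     # shape (2,)

try:
    z = x + y
except RuntimeError as e:
    # Sizes 3 and 2 differ and neither is 1, so broadcasting cannot align them.
    print(f"RuntimeError: {e}")
```

Broadcasting can only stretch a dimension of size 1; two mismatched non-singleton sizes (3 vs. 2) have no way to line up, hence the RuntimeError.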
🧠 Conceptual
expert
Understanding autograd in PyTorch
Which statement best describes how PyTorch's autograd system works during backpropagation?
A. Autograd requires the user to write explicit derivative functions for each operation.
B. Autograd manually updates weights during training without needing gradients.
C. Autograd only works with CPU tensors and ignores GPU tensors.
D. Autograd records operations on tensors with requires_grad=True to build a computation graph for automatic differentiation.
💡 Hint
Think about how PyTorch tracks operations to compute gradients automatically.
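A minimal sketch of that tracking in action: autograd records the operations on a tensor with requires_grad=True, then `backward()` walks the recorded graph to compute the gradient.

```python
import torch

# requires_grad=True tells autograd to record every operation on x.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x  # autograd builds the graph for y = x^2 + 3x
y.backward()        # traverse the graph to compute dy/dx automatically
print(x.grad)       # dy/dx = 2x + 3 = 7 at x = 2, so tensor(7.)
```

No derivative was written by hand: the gradient of each recorded operation is known to PyTorch, and the chain rule is applied over the graph.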