Challenge - 5 Problems
CUDA Mastery Badge
Get all challenges correct to earn this badge!
Test your skills under time pressure!
❓ Predict Output
Intermediate · 1:30
Check CUDA availability output
What is the output of this PyTorch code snippet that checks if CUDA is available?
PyTorch
import torch
print(torch.cuda.is_available())
💡 Hint
This function returns a boolean indicating hardware availability.
torch.cuda.is_available() returns True if PyTorch detects a CUDA-capable GPU and the CUDA drivers are properly installed; otherwise, it returns False.
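A minimal sketch (not part of the original quiz) of the common pattern this check enables: because the call returns a plain boolean and never raises, it can safely gate device selection on any machine.

```python
import torch

# is_available() returns a bool and never raises, even on a
# CPU-only build, so it is safe as a device-selection guard.
has_gpu = torch.cuda.is_available()
device = torch.device('cuda' if has_gpu else 'cpu')
print(has_gpu, device)
```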
🧠 Conceptual
Intermediate · 1:30
Understanding CUDA device count
Which PyTorch function correctly returns the number of CUDA devices available on the system?
💡 Hint
This function returns an integer count of GPUs.
torch.cuda.device_count() returns the number of CUDA-capable devices detected by PyTorch.
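A short sketch (beyond the quiz itself) of how the count is typically used; unlike some other torch.cuda calls, device_count() simply returns 0 on a machine with no GPU rather than raising.

```python
import torch

# device_count() is safe on CPU-only machines: it returns 0
# instead of raising, so the loop below just does nothing there.
n = torch.cuda.device_count()
for i in range(n):
    print(i, torch.cuda.get_device_name(i))
print(f"{n} CUDA device(s) detected")
```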
🔧 Debug
Advanced · 2:00
Diagnose CUDA availability error
What error will this code raise if CUDA is not available on the system?
PyTorch
import torch
print(torch.cuda.current_device())
💡 Hint
current_device() requires at least one CUDA device.
If no CUDA device is available, torch.cuda.current_device() raises a RuntimeError (for example, reporting an invalid device ordinal), because the call must initialize CUDA to query the active device.
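A defensive sketch (an addition, not part of the quiz) showing how to avoid the error: guard the call with is_available(), as suggested in the first problem.

```python
import torch

# current_device() assumes at least one CUDA device exists, so
# guard it; the unguarded call raises on a CPU-only machine.
if torch.cuda.is_available():
    idx = torch.cuda.current_device()  # ordinal of the active GPU, typically 0
    print(f"active device: cuda:{idx}")
else:
    print("no CUDA device; skipping current_device()")
```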
❓ Model Choice
Advanced · 2:00
Choosing device for model training
Given this code snippet, which device will the model be moved to if CUDA is available?
PyTorch
import torch
model = torch.nn.Linear(10, 2)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
print(next(model.parameters()).device)
💡 Hint
The device string 'cuda' defaults to the first GPU.
If CUDA is available, 'cuda' refers to the first GPU device (cuda:0), so the model parameters move there.
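A follow-on sketch (not in the original problem) of why the device placement matters: inputs must live on the same device as the model, or the forward pass fails with a device-mismatch RuntimeError.

```python
import torch

model = torch.nn.Linear(10, 2)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)  # for nn.Module, .to() moves parameters in place

# Create the input directly on the same device; a CPU tensor fed
# to a GPU-resident model would raise a device-mismatch error.
x = torch.randn(4, 10, device=device)
out = model(x)
print(out.device)  # cuda:0 on a GPU machine, cpu otherwise
```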
❓ Metrics
Expert · 2:30
Interpreting CUDA memory usage
What does the following PyTorch code output represent?
PyTorch
import torch
print(torch.cuda.memory_allocated())
💡 Hint
This function reports memory used by tensors, not total or peak memory.
torch.cuda.memory_allocated() returns the number of bytes of GPU memory currently occupied by tensors on the current (default) CUDA device; it does not include memory cached by the allocator or the device's total capacity.
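A small sketch (added for illustration) contrasting tensor memory with the caching allocator's pool; on a CPU-only build both calls report 0 because CUDA is never initialized.

```python
import torch

# memory_allocated() counts only bytes held by live tensors on the
# current device; memory_reserved() is the caching allocator's pool.
if torch.cuda.is_available():
    x = torch.ones(1024, 1024, device='cuda')  # 1024*1024*4 bytes of float32
    print(torch.cuda.memory_allocated())   # bytes used by tensors
    print(torch.cuda.memory_reserved())    # bytes held by the allocator
    del x
else:
    print(torch.cuda.memory_allocated())   # 0 without a GPU
```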