PyTorch · ~20 mins

CUDA availability check in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
Predict Output · intermediate
Check CUDA availability output
What is the output of this PyTorch code snippet that checks if CUDA is available?
PyTorch
import torch
print(torch.cuda.is_available())
A. True if a CUDA-enabled GPU is available, otherwise False
B. The number of CUDA devices available as an integer
C. Raises a RuntimeError if CUDA is not installed
D. Always prints False regardless of hardware
💡 Hint
This function returns a boolean indicating hardware availability.
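A minimal sketch you can run yourself (assuming PyTorch is installed): `is_available()` returns a plain `bool`, never a count and never an exception, which is what makes it safe to call on any machine.

```python
import torch

# is_available() returns a bool: True only when PyTorch was built with
# CUDA support AND a usable CUDA GPU is present; False otherwise.
available = torch.cuda.is_available()
print(available)
print(type(available))  # <class 'bool'>
```

Because it never raises, this call is the standard guard placed in front of any CUDA-specific code.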
🧠 Conceptual · intermediate
Understanding CUDA device count
Which PyTorch function correctly returns the number of CUDA devices available on the system?
A. torch.cuda.current_device()
B. torch.cuda.is_available()
C. torch.cuda.get_device_name()
D. torch.cuda.device_count()
💡 Hint
This function returns an integer count of GPUs.
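A short runnable sketch (assuming PyTorch is installed): `device_count()` returns an integer, and on a CPU-only machine it simply returns 0 rather than raising, so it can double as an availability check.

```python
import torch

# device_count() returns an int: the number of visible CUDA GPUs.
# On a machine with no GPU it returns 0 instead of raising.
n_gpus = torch.cuda.device_count()
print(f"CUDA devices visible: {n_gpus}")
for i in range(n_gpus):
    # get_device_name(i) is only valid for indices < device_count().
    print(i, torch.cuda.get_device_name(i))
```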
🔧 Debug · advanced
Diagnose CUDA availability error
What error will this code raise if CUDA is not available on the system?
PyTorch
import torch
print(torch.cuda.current_device())
A. IndexError: list index out of range
B. RuntimeError: CUDA error: invalid device ordinal
C. AssertionError: CUDA not available
D. No error, prints 0
💡 Hint
current_device() requires at least one CUDA device.
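The usual defensive pattern, sketched below (assuming PyTorch is installed): since `current_device()` initializes the CUDA context and fails on machines without a usable GPU (the exact exception message varies by PyTorch build), guard it with `is_available()`.

```python
import torch

# current_device() triggers CUDA initialization, so it must only be
# called after confirming a CUDA device actually exists.
if torch.cuda.is_available():
    print(torch.cuda.current_device())  # index of the active GPU, e.g. 0
else:
    print("No CUDA device; skipping current_device()")
```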
Model Choice · advanced
Choosing device for model training
Given this code snippet, which device will the model be moved to if CUDA is available?
PyTorch
import torch
model = torch.nn.Linear(10, 2)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)
print(next(model.parameters()).device)
A. cuda:0
B. cpu
C. cuda:1
D. Raises an error
💡 Hint
The device string 'cuda' defaults to the first GPU.
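A runnable version of the selection pattern above (assuming PyTorch is installed): the bare string `'cuda'` is shorthand for `'cuda:0'`, so on a GPU machine the parameters land on the first device, and on a CPU-only machine they stay on `cpu`.

```python
import torch

model = torch.nn.Linear(10, 2)
# 'cuda' with no index resolves to the first GPU, cuda:0.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model.to(device)  # for nn.Module, .to() moves parameters in place
param_device = next(model.parameters()).device
print(param_device)  # cuda:0 on a CUDA machine, cpu otherwise
```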
Metrics · expert
Interpreting CUDA memory usage
What does the following PyTorch code output represent?
PyTorch
import torch
print(torch.cuda.memory_allocated())
A. The peak memory usage during the program run
B. The total GPU memory available on the system in bytes
C. The number of bytes currently allocated by tensors on the default CUDA device
D. The number of CUDA devices available
💡 Hint
This function reports memory used by tensors, not total or peak memory.
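A guarded sketch of the distinction the hint draws (assuming PyTorch is installed): `memory_allocated()` counts bytes held by live tensors on the current device, while `max_memory_allocated()` reports the peak and `memory_reserved()` reports what the caching allocator has reserved from the driver.

```python
import torch

if torch.cuda.is_available():
    print(torch.cuda.memory_allocated())      # bytes held by live tensors
    x = torch.ones(1024, 1024, device='cuda')  # ~4 MB of float32
    print(torch.cuda.memory_allocated())      # grows by roughly that much
    print(torch.cuda.max_memory_allocated())  # peak, not current, usage
    print(torch.cuda.memory_reserved())       # caching-allocator reservation
    del x  # frees the tensor; memory_allocated() drops again
else:
    print("CUDA unavailable; these counters are only meaningful on a GPU")
```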