Recall & Review
beginner
What does CUDA stand for and why is it important in PyTorch?
CUDA stands for Compute Unified Device Architecture. It allows PyTorch to use NVIDIA GPUs to speed up computations, making training and running models much faster than using only the CPU.
beginner
How do you check if CUDA is available in PyTorch?
You use torch.cuda.is_available(). It returns True if a compatible GPU is ready to use, otherwise False.
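A common way to use this check in practice is to fold it into a single device object once, so the rest of the code never branches on it. A minimal sketch:

```python
import torch

# Pick the GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tensors created with device= land on that device directly.
x = torch.ones(3, device=device)
print(x.device.type)  # "cuda" on a GPU machine, "cpu" otherwise
```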
intermediate
What does torch.cuda.device_count() tell you?
It tells you how many CUDA-capable GPUs are available on your system. This helps you know if you can use multiple GPUs for training.
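To see the count in practice, you can also print the name of each detected GPU with torch.cuda.get_device_name, which accepts any valid device index:

```python
import torch

n_gpus = torch.cuda.device_count()  # 0 when no CUDA GPU is present
print(f"Detected {n_gpus} CUDA GPU(s)")
for i in range(n_gpus):
    # get_device_name(i) returns the GPU's model string for device index i
    print(f"  GPU {i}: {torch.cuda.get_device_name(i)}")
```

On a machine without a GPU the loop simply runs zero times, so this snippet is safe to run anywhere.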
beginner
Why should you check CUDA availability before running a PyTorch model?
Because if CUDA is not available and you try to run your model on GPU, your code will fail. Checking first lets you decide to run on CPU or GPU safely.
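The safe pattern is to choose the device first and then move both the model and its inputs there. The sketch below uses a small nn.Linear layer purely as a stand-in for a real model:

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(4, 2).to(device)         # parameters move with the model
inputs = torch.randn(8, 4, device=device)  # inputs must be on the same device
outputs = model(inputs)
print(outputs.shape)  # torch.Size([8, 2])
```

If the model and its inputs end up on different devices, PyTorch raises a RuntimeError, which is exactly the failure this check prevents.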
beginner
Write a simple PyTorch code snippet to print if CUDA is available and how many GPUs are detected.
import torch

if torch.cuda.is_available():
    print(f"CUDA is available. Number of GPUs: {torch.cuda.device_count()}")
else:
    print("CUDA is not available. Using CPU.")
Which PyTorch function checks if CUDA is available?
The correct function to check CUDA availability is torch.cuda.is_available().

What does torch.cuda.device_count() return?
torch.cuda.device_count() returns the number of CUDA-capable GPUs available.

If torch.cuda.is_available() returns False, what should you do?
If CUDA is not available, you should run your model on CPU to avoid errors.

Why is using CUDA beneficial for training models?
CUDA allows PyTorch to use GPU power, which speeds up training significantly.
What will this code print if no GPU is available?
if torch.cuda.is_available():
print("GPU ready")
else:
print("Using CPU")
✗ Incorrect
If no GPU is available,
torch.cuda.is_available() returns False, so it prints "Using CPU".Explain how to check if your PyTorch code can use a GPU and why this check is important.
Think about a simple yes/no question your code asks before using GPU.
Describe what information torch.cuda.device_count() provides and how it can help in training models.
It tells you how many GPUs your computer has.
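When the count is greater than one, a simple (if somewhat dated) way to use every GPU for training is nn.DataParallel. This sketch only wraps the model when multiple GPUs are actually present, and again uses a small nn.Linear layer as a stand-in for a real model:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for a real model
if torch.cuda.device_count() > 1:
    # Replicates the model across GPUs and splits each batch between them.
    model = nn.DataParallel(model)
model = model.to("cuda" if torch.cuda.is_available() else "cpu")
```

On a single-GPU or CPU-only machine the wrapper is skipped, so the same script runs unchanged everywhere.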