PyTorch · ~10 mins

CUDA availability check in PyTorch - ML Experiment: Train & Evaluate

Experiment - CUDA availability check
Problem: You want to check whether your computer can use the GPU via CUDA to speed up PyTorch computations.
Current Metrics: N/A - this is a hardware/software environment check, not a model training task.
Issue: You are unsure whether CUDA is available and properly configured for PyTorch on your system.
Your Task
Write a simple PyTorch script to check if CUDA is available and print the result clearly.
Use only PyTorch library functions.
Do not attempt to train any model.
Output must be clear and user-friendly.
Solution
PyTorch
import torch

def check_cuda():
    if torch.cuda.is_available():
        num_gpus = torch.cuda.device_count()
        print(f"CUDA is available! Number of GPUs detected: {num_gpus}")
        for i in range(num_gpus):
            print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
    else:
        print("CUDA is not available on this system.")

if __name__ == "__main__":
    check_cuda()
Added a function to check CUDA availability using torch.cuda.is_available().
Printed the number of GPUs detected with torch.cuda.device_count().
Printed the name of each GPU detected with torch.cuda.get_device_name(i).
Added a clear message if CUDA is not available.
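The check above can be extended with a bit more environment detail. A hedged sketch, using torch.version.cuda (the CUDA version PyTorch was built with, or None for CPU-only builds) and torch.cuda.get_device_properties for per-GPU memory:

```python
import torch

def cuda_summary():
    # torch.version.cuda is None on CPU-only builds of PyTorch
    print(f"PyTorch built with CUDA: {torch.version.cuda}")
    if torch.cuda.is_available():
        for i in range(torch.cuda.device_count()):
            props = torch.cuda.get_device_properties(i)
            # total_memory is in bytes; convert to GiB for readability
            print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GiB")
    else:
        print("CUDA is not available on this system.")

if __name__ == "__main__":
    cuda_summary()
```

Note that torch.version.cuda reports the build-time CUDA version, which can differ from the driver's CUDA version installed on the machine.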
Results Interpretation
Before: the user did not know whether CUDA was available. After: running the script prints a clear message about CUDA availability and the detected GPUs.
Checking CUDA availability is a simple but important first step: it confirms that your PyTorch code can run on GPU hardware for faster training and inference.
Bonus Experiment
Modify the script to create a tensor on the GPU if CUDA is available, and on CPU otherwise, then print the device of the tensor.
💡 Hint
Use torch.device('cuda') or torch.device('cpu') and create a tensor with .to(device). Then print tensor.device.
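A minimal sketch of the bonus experiment, following the hint (the tensor shape and values are arbitrary):

```python
import torch

# Pick the GPU if CUDA is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Create a small tensor and move it to the chosen device
x = torch.ones(3, 3).to(device)
print(f"Tensor is on device: {x.device}")
```

On a machine without CUDA this prints a cpu device; on a CUDA machine it prints cuda:0 (the default GPU index).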