
CUDA availability check in PyTorch

Introduction

We check whether CUDA is available so the GPU can be used for faster model training and inference.

Typical situations where this check matters:

Before training a deep learning model, to decide whether to use the GPU or the CPU.
When running the same code on different machines, to adapt to the available hardware.
To optimize performance by using the GPU whenever one is present.
To avoid errors when running GPU code on a machine without CUDA.
Syntax
PyTorch
import torch

cuda_available = torch.cuda.is_available()

torch.cuda.is_available() returns True if a CUDA-capable GPU and working drivers are detected; otherwise it returns False.

Use this check to set device: device = torch.device('cuda' if cuda_available else 'cpu')
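Once the device is set, tensors and models can be placed on it explicitly. A minimal sketch (the tensor shapes and the Linear layer are illustrative, not from the original):

```python
import torch

# Pick the device once, then create tensors and move models to it.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x = torch.randn(3, 4, device=device)      # tensor created directly on the chosen device
model = torch.nn.Linear(4, 2).to(device)  # model parameters moved to the same device

out = model(x)                            # inputs and weights must share a device
print(out.device)
```

Because everything follows the single device variable, the same script runs unchanged on GPU and CPU machines.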

Examples
Prints True if a CUDA GPU is available, otherwise False.
PyTorch
import torch
print(torch.cuda.is_available())
Sets device to cuda if available, else cpu, then prints it.
PyTorch
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
Sample Model

This program checks whether CUDA is available and prints the GPU's name if so; otherwise it reports that the CPU is being used.

PyTorch
import torch

def check_cuda():
    if torch.cuda.is_available():
        print('CUDA is available!')
        print(f'GPU device name: {torch.cuda.get_device_name(0)}')
    else:
        print('CUDA is not available. Using CPU.')

check_cuda()
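Beyond the first GPU's name, torch.cuda also reports how many devices exist and their properties. A short sketch of those queries (the GiB formatting is an illustrative choice):

```python
import torch

# Query the number of CUDA devices and a few properties of each.
if torch.cuda.is_available():
    n = torch.cuda.device_count()
    print(f'Number of CUDA GPUs: {n}')
    for i in range(n):
        props = torch.cuda.get_device_properties(i)
        print(f'GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GiB')
else:
    print('No CUDA GPUs detected.')
```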
Important Notes

Make sure you have installed the correct NVIDIA/CUDA drivers and a PyTorch build with CUDA support.

If CUDA is not available, your code will run on CPU, which is slower for large models.
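This is also why the check prevents crashes: skipping it and forcing 'cuda' on a machine without a usable GPU raises an error rather than silently falling back to the CPU. A small sketch of that failure mode:

```python
import torch

# Forcing a tensor onto 'cuda' without a usable GPU raises an error
# (RuntimeError, or AssertionError on CPU-only PyTorch builds).
if not torch.cuda.is_available():
    try:
        torch.zeros(1, device='cuda')
    except (RuntimeError, AssertionError) as e:
        print(f'GPU call failed as expected: {e}')
else:
    print('CUDA is available; the call above would succeed here.')
```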

Summary

Use torch.cuda.is_available() to check GPU availability.

Set device accordingly to run models on GPU or CPU.

This helps your code adapt to different hardware automatically.