How to Fix CUDA Not Available Error in PyTorch
The "CUDA not available" error in PyTorch happens when your system or PyTorch can't detect a compatible GPU. To fix it, ensure your GPU drivers and CUDA toolkit are installed correctly, and use torch.cuda.is_available() to check before moving tensors or models to CUDA.
Why This Happens
This error occurs because PyTorch cannot find a GPU device with CUDA support on your system. It can happen if CUDA drivers are missing, the GPU is not compatible, or the code tries to use CUDA without checking availability first.
python
import torch

device = torch.device('cuda')
tensor = torch.tensor([1, 2, 3]).to(device)
Output
RuntimeError: Found no NVIDIA driver on your system.
# or
AssertionError: Torch not compiled with CUDA enabled
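Before changing any code, it helps to check whether your PyTorch build was compiled with CUDA at all and whether it can see a GPU right now. A minimal diagnostic:

```python
import torch

# Was this PyTorch build compiled with CUDA, and can it see a GPU?
print(f"PyTorch version: {torch.__version__}")
print(f"Built with CUDA: {torch.version.cuda}")   # None on CPU-only builds
print(f"CUDA available:  {torch.cuda.is_available()}")
print(f"GPU count:       {torch.cuda.device_count()}")
```

If "Built with CUDA" prints None, you installed a CPU-only wheel and no driver fix will help; reinstall a CUDA-enabled build instead.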
The Fix
First, verify your GPU and CUDA installation. Then, modify your code to check if CUDA is available before using it. This prevents errors on machines without CUDA.
python
import torch

if torch.cuda.is_available():
    device = torch.device('cuda')
else:
    device = torch.device('cpu')

tensor = torch.tensor([1, 2, 3]).to(device)
print(f"Using device: {device}")
Output
Using device: cuda
# or
Using device: cpu
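The same fallback pattern extends beyond single tensors. A short sketch moving a small model and its input to whichever device was selected (the model and its inputs must live on the same device before a forward pass):

```python
import torch
import torch.nn as nn

# Same fallback as above, written as a one-liner.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Move both the model and its inputs to the selected device.
model = nn.Linear(3, 1).to(device)
inputs = torch.tensor([[1.0, 2.0, 3.0]]).to(device)
output = model(inputs)
print(f"Output computed on: {output.device}")
```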
Prevention
Always check torch.cuda.is_available() before using CUDA in your code. Keep your GPU drivers and CUDA toolkit updated. Use virtual environments to manage PyTorch versions compatible with your CUDA version.
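One way to follow this advice consistently is to centralize device selection in a small helper so the check lives in one place. A sketch (get_device is an illustrative name, not a PyTorch API):

```python
import torch

def get_device(prefer: str = 'cuda') -> torch.device:
    """Return the preferred device if available, else fall back to CPU.

    Illustrative helper, not part of PyTorch itself.
    """
    if prefer == 'cuda' and torch.cuda.is_available():
        return torch.device('cuda')
    return torch.device('cpu')

device = get_device()
print(f"Selected device: {device}")
```

Calling get_device() everywhere instead of hard-coding 'cuda' means the same code runs unchanged on CPU-only machines.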
Related Errors
Other common errors include:
- RuntimeError: CUDA out of memory - fix by reducing batch size or freeing GPU memory.
- AssertionError: Torch not compiled with CUDA enabled - fix by installing a CUDA-enabled PyTorch version.
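For the out-of-memory case, a common pattern is to catch the RuntimeError, release cached GPU memory, and retry with a smaller batch. A sketch (run_batch and the halving strategy are illustrative, not a PyTorch API):

```python
import torch

def run_batch(model, batch):
    """Run a forward pass, shrinking the batch on CUDA out-of-memory.

    Illustrative sketch: real training code would also handle
    gradient accumulation so the effective batch size is unchanged.
    """
    try:
        return model(batch)
    except RuntimeError as e:
        if 'out of memory' not in str(e):
            raise                          # unrelated error: re-raise
        torch.cuda.empty_cache()           # release cached GPU blocks
        half = max(1, batch.shape[0] // 2)
        return model(batch[:half])         # retry with half the batch
```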
Key Takeaways
- Always check torch.cuda.is_available() before using CUDA in PyTorch.
- Ensure your GPU drivers and CUDA toolkit are properly installed and compatible.
- Fall back to the CPU device to avoid runtime errors on machines without CUDA.
- Keep PyTorch and CUDA versions aligned for smooth GPU usage.
- Monitor GPU memory to prevent out-of-memory errors.