How to Move Tensor to GPU in PyTorch: Simple Guide
In PyTorch, you can move a tensor to the GPU by calling
tensor.to('cuda') or tensor.cuda(). This transfers the tensor from CPU memory to GPU memory, enabling faster computations on supported devices.
Syntax
To move a tensor to GPU, use either tensor.to('cuda') or tensor.cuda(). Both methods transfer the tensor from CPU to GPU memory.
- tensor.to('cuda'): Moves the tensor to the default GPU device.
- tensor.cuda(): Also moves the tensor to the GPU; commonly used shorthand.
- You can specify a device explicitly, like tensor.to('cuda:0') for the first GPU.
```python
tensor_gpu = tensor.to('cuda')
tensor_gpu = tensor.cuda()
```
Example
This example creates a tensor on CPU, moves it to GPU, and prints the device of the tensor before and after moving.
```python
import torch

# Create a tensor on the CPU
tensor_cpu = torch.tensor([1, 2, 3])
print('Before moving:', tensor_cpu.device)

# Move the tensor to the GPU if one is available
if torch.cuda.is_available():
    tensor_gpu = tensor_cpu.to('cuda')
    print('After moving:', tensor_gpu.device)
else:
    print('GPU not available, tensor remains on:', tensor_cpu.device)
```
Output
Before moving: cpu
After moving: cuda:0
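Note that to() and cuda() return a new tensor rather than moving the original in place, and you can move a tensor back to the CPU with cpu(). A minimal round-trip sketch, which falls back to the CPU when no GPU is present:

```python
import torch

tensor = torch.tensor([1, 2, 3])

# Pick the GPU if available, otherwise stay on the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tensor = tensor.to(device)  # returns a new tensor; the original is unchanged

# Move the tensor back to the CPU, e.g. before calling .numpy()
tensor_cpu = tensor.cpu()
print(tensor_cpu.device)  # cpu
```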
Common Pitfalls
Common mistakes when moving tensors to GPU include:
- Trying to move a tensor to the GPU when no GPU is available, which raises a runtime error.
- Mixing CPU and GPU tensors in a single operation, which causes runtime errors.
- Forgetting to move model parameters to the GPU along with the input tensors.
Always check whether a GPU is available with torch.cuda.is_available() before moving tensors.
```python
import torch

tensor_cpu = torch.tensor([1, 2, 3])

# Wrong: moving the tensor without checking GPU availability
# tensor_gpu = tensor_cpu.to('cuda')  # May raise an error if no GPU is present

# Right: check first
if torch.cuda.is_available():
    tensor_gpu = tensor_cpu.to('cuda')
else:
    tensor_gpu = tensor_cpu  # Keep on CPU
print(tensor_gpu.device)
```
Output (on a machine without a GPU)
cpu
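The model-parameters pitfall can be avoided by moving the model and its inputs to the same device before the forward pass. A minimal sketch, using a small nn.Linear model chosen purely for illustration:

```python
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Move both the model parameters and the input to the same device
model = nn.Linear(3, 1).to(device)  # nn.Linear chosen only for illustration
x = torch.randn(2, 3).to(device)

output = model(x)  # works because model and input share a device
print(output.device)
```

Calling a CUDA model on a CPU tensor (or vice versa) raises a runtime error about tensors being on different devices.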
Quick Reference
Summary tips for moving tensors to GPU in PyTorch:
- Use tensor.to('cuda') or tensor.cuda() to move tensors.
- Always check GPU availability with torch.cuda.is_available().
- Keep tensors and models on the same device to avoid errors.
- You can specify a GPU device explicitly, e.g. cuda:0, cuda:1, etc.
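The tips above combine into a common device-agnostic pattern. This sketch selects the first GPU when one exists and also shows that tensors can be created directly on the target device:

```python
import torch

# Select a specific GPU by index when one is present; fall back to CPU
if torch.cuda.is_available():
    device = torch.device('cuda:0')  # first GPU; 'cuda:1' would be the second
else:
    device = torch.device('cpu')

# Tensors can be created directly on the target device, skipping a transfer
tensor = torch.tensor([1.0, 2.0, 3.0], device=device)
print(tensor.device)
```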
Key Takeaways
Use tensor.to('cuda') or tensor.cuda() to move tensors to GPU in PyTorch.
Always check if GPU is available with torch.cuda.is_available() before moving tensors.
Keep all tensors and models on the same device to avoid runtime errors.
Specify GPU device explicitly if using multiple GPUs, e.g., 'cuda:0'.
Moving tensors to GPU enables faster computation on supported hardware.