Using a GPU makes tensor calculations much faster. Moving tensors to the GPU lets your program use this speed.
GPU tensors (to, cuda) in PyTorch
Introduction
When training a deep learning model and you want to speed up calculations.
When working with large datasets that need fast processing.
When running neural networks that require heavy matrix operations.
When you want to use PyTorch's GPU support for better performance.
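The use cases above all come down to the same pattern: put the tensors involved in heavy operations on the GPU when one is available. A minimal sketch (falling back to CPU so it runs anywhere):

```python
import torch

# Pick the fastest available device; use CUDA only when present.
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Heavy matrix operations like this benefit most from the GPU.
x = torch.randn(256, 256, device=device)
y = torch.randn(256, 256, device=device)
z = x @ y
print(z.shape)  # torch.Size([256, 256])
```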
Syntax
PyTorch
tensor.to(device)
tensor.cuda(device=None)

to(device) moves a tensor to the specified device, such as 'cpu' or 'cuda'.
cuda() moves a tensor to the GPU. You can specify which GPU if you have more than one.
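Note that to() accepts either a device string or a torch.device object; both forms are equivalent. A small sketch (guarded so it also runs on a CPU-only machine):

```python
import torch

t = torch.tensor([1, 2, 3])

# Fall back to CPU when no GPU is available.
dev_str = 'cuda' if torch.cuda.is_available() else 'cpu'
dev_obj = torch.device(dev_str)

a = t.to(dev_str)   # string form
b = t.to(dev_obj)   # torch.device form
print(a.device.type == b.device.type)  # True: both land on the same device
```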
Examples
This moves the tensor t to the default GPU.
PyTorch
import torch

t = torch.tensor([1, 2, 3])
t_gpu = t.to('cuda')
This also moves the tensor t to the default GPU using cuda().
PyTorch
import torch

t = torch.tensor([1, 2, 3])
t_gpu = t.cuda()
This moves the tensor to GPU number 0 explicitly.
PyTorch
import torch

t = torch.tensor([1, 2, 3])
t_gpu = t.to('cuda:0')
Sample Model
This program checks if a GPU is available. It creates a tensor on the CPU, moves it to the GPU if possible, and then squares the tensor on that device.
PyTorch
import torch

# Check if GPU is available
if torch.cuda.is_available():
    device = torch.device('cuda')
else:
    device = torch.device('cpu')

# Create a tensor on CPU
t = torch.tensor([1.0, 2.0, 3.0])
print(f'Original tensor device: {t.device}')

# Move tensor to GPU if available
t_gpu = t.to(device)
print(f'Tensor device after moving: {t_gpu.device}')

# Perform a simple operation on GPU
t_gpu_squared = t_gpu ** 2
print(f'Tensor squared on device: {t_gpu_squared}')
Output (on a machine with a CUDA GPU):

Original tensor device: cpu
Tensor device after moving: cuda:0
Tensor squared on device: tensor([1., 4., 9.], device='cuda:0')
Important Notes
Always check if a GPU is available using torch.cuda.is_available() before moving tensors.
Moving tensors between CPU and GPU takes time, so do it only when needed.
Operations on tensors happen on the device where the tensor is located.
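The last note is worth seeing in code: an operation's result lives on the same device as its operands, and you bring it back with cpu() (or to('cpu')) when you need it on the CPU again, for example for NumPy interop. A sketch that runs with or without a GPU:

```python
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

a = torch.tensor([1.0, 2.0, 3.0]).to(device)
b = torch.tensor([4.0, 5.0, 6.0]).to(device)

# Operands are on the same device, so the result stays there too.
c = a + b
print(c.device == a.device)  # True

# Bring the result back to the CPU when needed.
c_cpu = c.cpu()
print(c_cpu.device.type)  # cpu
```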
Summary
Moving tensors to GPU speeds up calculations.
Use to('cuda') or cuda() to move tensors to GPU.
Check GPU availability before moving tensors.