Complete the code to check if a GPU is available in PyTorch.
import torch

is_gpu = torch.cuda.[1]()
The correct function to check GPU availability in PyTorch is torch.cuda.is_available(), which returns True if a CUDA-capable GPU is available.
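Filled in, the snippet runs as a complete program; on a machine without a CUDA GPU it simply prints False:

```python
import torch

# torch.cuda.is_available() returns True only when a CUDA device
# and a CUDA-enabled PyTorch build are both present.
is_gpu = torch.cuda.is_available()
print(is_gpu)
```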
Complete the code to move a tensor to GPU if available.
device = torch.device('cuda' if torch.cuda.[1]() else 'cpu')
tensor = torch.tensor([1, 2, 3]).to(device)
To check whether a CUDA GPU is available, use torch.cuda.is_available(). Its result is commonly used to choose the device for tensor operations.
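With the blank filled, the standard device-selection idiom looks like this; it falls back to CPU automatically, so the same code runs on any machine:

```python
import torch

# Pick 'cuda' when a GPU is available, otherwise fall back to the CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tensor = torch.tensor([1, 2, 3]).to(device)
print(tensor.device)  # 'cuda:0' on a GPU machine, 'cpu' otherwise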
Complete the code to print the name of the current GPU device.
if torch.cuda.is_available():
    print(torch.cuda.get_device_[1]())
The correct function to get the GPU device name is torch.cuda.get_device_name().
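As a complete, guarded sketch: torch.cuda.get_device_name() takes an optional device index (0 is the first GPU), and the availability check lets the snippet run on CPU-only machines too:

```python
import torch

# get_device_name(0) reports the name of the first CUDA device,
# e.g. a string such as 'NVIDIA GeForce RTX 3090'.
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
else:
    print('No CUDA GPU detected')
```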
Fill both blanks to create a tensor on GPU and print its device.
tensor = torch.tensor([1, 2, 3], device=[1])
print(tensor.[2])
To create a tensor on the GPU, pass device='cuda'. To print the device a tensor lives on, access tensor.device.
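Note that device='cuda' raises an error on a machine with no GPU, so a portable version of the filled-in answer selects the device first; creating the tensor directly on the target device also avoids an extra host-to-device copy:

```python
import torch

# Guarded device choice so the snippet also runs without a GPU.
dev = 'cuda' if torch.cuda.is_available() else 'cpu'
tensor = torch.tensor([1, 2, 3], device=dev)
print(tensor.device)
```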
Fill all three blanks to move a model to the GPU and print the device of its first parameter.
model = MyModel()
model.to([1])
first_param = next(model.parameters())
print(first_param.[2])
print(torch.cuda.[3]())
Move the model to the GPU with model.to('cuda'). A parameter's device is accessed via param.device. Check GPU availability with torch.cuda.is_available().
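A runnable sketch of the filled-in answer follows. MyModel is not defined in the exercise, so a single nn.Linear layer stands in for it here, and the device is chosen with a guard so the code also runs on CPU-only machines:

```python
import torch
import torch.nn as nn

# MyModel is not defined in the exercise; a Linear layer stands in.
model = nn.Linear(3, 1)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device)  # moves all parameters and buffers in place

# All parameters now live on the chosen device.
first_param = next(model.parameters())
print(first_param.device)
print(torch.cuda.is_available())
```

Unlike tensor.to(), which returns a new tensor, model.to() modifies the module in place (and also returns it), so reassignment is optional.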