Which PyTorch command correctly checks if a GPU is available for use?
Look for the official PyTorch method that returns a boolean indicating GPU availability.
The correct method to check GPU availability in PyTorch is torch.cuda.is_available(). Other options are not valid PyTorch commands.
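A minimal sketch of the check in practice (the device-name lookup is just for illustration):

```python
import torch

# torch.cuda.is_available() returns a boolean: True if a usable
# CUDA GPU is present, False otherwise.
if torch.cuda.is_available():
    device = torch.device('cuda')
    print(f"Using GPU: {torch.cuda.get_device_name(0)}")
else:
    device = torch.device('cpu')
    print("No GPU found; falling back to CPU.")
```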
What is the output of this PyTorch code snippet?
import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tensor = torch.tensor([1, 2, 3]).to(device)
print(tensor.device)
The tensor is moved to the first GPU if available, otherwise CPU.
If a GPU is available, the tensor's device will be cuda:0; otherwise it will be cpu. The code prints the device property, which includes both the device type and its index.
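The type and index that make up the printed value can also be read separately, which is handy when branching on where a tensor lives. A small sketch:

```python
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tensor = torch.tensor([1, 2, 3]).to(device)

# torch.device exposes its parts as attributes:
print(tensor.device.type)   # 'cuda' on a GPU machine, 'cpu' otherwise
print(tensor.device.index)  # 0 for the first GPU, None on CPU
```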
You want to train a deep learning model using PyTorch on a machine with multiple GPUs. Which device assignment is best to ensure the model uses the first GPU?
Remember GPU indices start at 0. To use the first GPU explicitly, specify its index.
Using cuda:0 explicitly selects the first GPU. Plain cuda defaults to cuda:0, but being explicit is clearer on multi-GPU machines. cuda:1 selects the second GPU, and cpu runs on the processor only.
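A short sketch of the explicit assignment, with a CPU fallback so it also runs on machines without a GPU:

```python
import torch

# Explicitly target the first GPU; the ":0" index makes the choice
# unambiguous on multi-GPU machines.
if torch.cuda.is_available():
    device = torch.device('cuda:0')
else:
    device = torch.device('cpu')  # fallback when no GPU is present

x = torch.zeros(4, device=device)
```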
When training a model on GPU, increasing the batch size will most likely:
Think about how many data samples are processed at once and how that affects memory.
Increasing batch size means more data is processed simultaneously, which requires more GPU memory to store inputs, activations, and gradients.
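A back-of-the-envelope sketch makes the memory effect concrete; the sample shape and float32 assumption are illustrative, and this counts only the input tensors (activations and gradients add substantially more):

```python
# Rough memory footprint of a float32 input batch:
# bytes = batch_size * elements_per_sample * 4
def batch_input_bytes(batch_size, sample_shape):
    elements = 1
    for dim in sample_shape:
        elements *= dim
    return batch_size * elements * 4  # float32 is 4 bytes per element

small = batch_input_bytes(32, (3, 224, 224))   # ~19 MB of inputs
large = batch_input_bytes(256, (3, 224, 224))  # ~154 MB of inputs
```

Scaling the batch size by 8x scales the input memory by 8x; the same multiplier applies to the per-layer activations stored for the backward pass.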
Given this PyTorch training loop snippet, what is the most likely cause of a CUDA Out of Memory error?
for data, target in dataloader:
optimizer.zero_grad()
output = model(data.to(device))
loss = loss_fn(output, target.to(device))
loss.backward()
optimizer.step()
Consider what happens if gradients keep adding up every iteration without reset.
If optimizer.zero_grad() is missing or misplaced, gradients accumulate across iterations instead of being reset, and the extra retained state consumes additional GPU memory, eventually causing an out-of-memory error. Note that the snippet shown calls optimizer.zero_grad() correctly; the question refers to variants of this loop where that call is absent or placed after the backward pass.
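The accumulation behavior is easy to demonstrate on a tiny tensor (a contrived setup, not a full training loop): each backward() call adds into the existing .grad buffer until it is explicitly zeroed.

```python
import torch

w = torch.tensor([2.0], requires_grad=True)

loss = (w * 3).sum()
loss.backward()
first = w.grad.clone()   # d(3w)/dw = 3.0

loss = (w * 3).sum()
loss.backward()          # no zero_grad() in between -> gradients add up
second = w.grad.clone()  # now 6.0, not 3.0

w.grad.zero_()           # manual reset, equivalent to optimizer.zero_grad()
```

This is why zero_grad() belongs at the top of every iteration: without it, each step optimizes against a running sum of stale gradients.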