PyTorch · ~20 mins

Installation and GPU setup in PyTorch - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · intermediate
Understanding GPU Availability in PyTorch

Which PyTorch command correctly checks if a GPU is available for use?

A) torch.cuda.is_available()
B) torch.gpu.is_ready()
C) torch.device('gpu').is_available()
D) torch.cuda.check_gpu()
💡 Hint

Look for the official PyTorch method that returns a boolean indicating GPU availability.
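To make the check concrete, here is a minimal sketch of how the availability call is typically used (the variable names are illustrative; only the torch calls are from the library):

```python
import torch

# torch.cuda.is_available() returns a plain bool: True only if a
# CUDA-capable GPU and a working driver are visible to PyTorch.
gpu_ok = torch.cuda.is_available()
print(f"CUDA available: {gpu_ok}")

if gpu_ok:
    # On a GPU machine you can also inspect how many devices exist
    # and what the first one is called.
    print(torch.cuda.device_count(), torch.cuda.get_device_name(0))
```

Because it returns a boolean, the call composes naturally with a conditional device choice, which is exactly the pattern the next question tests.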

Predict Output · intermediate
Output of Device Assignment Code

What is the output of this PyTorch code snippet?

import torch
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
tensor = torch.tensor([1, 2, 3]).to(device)
print(tensor.device)
A) cpu
B) cuda:0
C) cuda
D) cuda:1
💡 Hint

The tensor is moved to the first GPU if available, otherwise CPU.
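You can verify this behavior yourself; a minimal sketch (output depends on whether the machine running it has a GPU):

```python
import torch

# Pick CUDA if present, otherwise fall back to CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
t = torch.tensor([1, 2, 3]).to(device)

# On a GPU machine this prints "cuda:0": a bare 'cuda' device resolves
# to the current (by default the first, index 0) GPU once a tensor is
# placed on it. Without a GPU it prints "cpu".
print(t.device)
```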

Model Choice · advanced
Choosing the Correct Device for Model Training

You want to train a deep learning model using PyTorch on a machine with multiple GPUs. Which device assignment is best to ensure the model uses the first GPU?

A) device = torch.device('cuda')
B) device = torch.device('cpu')
C) device = torch.device('cuda:1')
D) device = torch.device('cuda:0')
💡 Hint

Remember GPU indices start at 0. To use the first GPU explicitly, specify its index.
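A minimal sketch of pinning a model to the first GPU explicitly; the tiny nn.Linear model here is purely illustrative:

```python
import torch
import torch.nn as nn

# Toy model used only for illustration.
model = nn.Linear(4, 2)

# GPU indices start at 0, so 'cuda:0' names the first GPU explicitly.
# On a multi-GPU machine this avoids ambiguity about which device is used.
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model = model.to(device)

# All of the model's parameters now live on the chosen device.
print(next(model.parameters()).device)
```

Note that `torch.device('cuda')` (option A) usually also resolves to the first GPU, but `'cuda:0'` states the intent explicitly, which matters once code runs on machines with several GPUs.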

Hyperparameter · advanced
Effect of Batch Size on GPU Memory Usage

When training a model on GPU, increasing the batch size will most likely:

A) Cause the GPU to switch to CPU automatically
B) Have no effect on GPU memory usage
C) Increase GPU memory usage
D) Decrease GPU memory usage
💡 Hint

Think about how many data samples are processed at once and how that affects memory.
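The linear relationship is easy to see from the input tensors alone; a minimal sketch (the 3x224x224 image shape is an illustrative example, and the same scaling applies to activations stored during the forward pass):

```python
import torch

# Memory for a batch of inputs grows linearly with batch size:
# a float32 tensor uses 4 bytes per element.
for batch_size in (32, 64, 128):
    x = torch.zeros(batch_size, 3, 224, 224)  # e.g. a batch of RGB images
    mib = x.element_size() * x.nelement() / 2**20
    print(f"batch {batch_size:4d}: {mib:7.1f} MiB")
```

Doubling the batch size doubles the element count, so it roughly doubles the memory needed for inputs and intermediate activations on the GPU.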

🔧 Debug · expert
Identifying the Cause of CUDA Out of Memory Error

Given this PyTorch training loop snippet, what is the most likely cause of a CUDA Out of Memory error?

for data, target in dataloader:
    optimizer.zero_grad()
    output = model(data.to(device))
    loss = loss_fn(output, target.to(device))
    loss.backward()
    optimizer.step()
A) Accumulating gradients without clearing them each iteration
B) Not moving data and target tensors to the GPU device
C) Not calling optimizer.zero_grad() before the backward pass
D) Using loss.backward() instead of loss.forward()
💡 Hint

Consider what happens if gradients keep adding up every iteration without reset.
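As a contrast to the buggy pattern, here is a minimal sketch of a loop that resets gradients every iteration and avoids holding GPU memory across iterations; the toy model, data, and hyperparameters are illustrative only:

```python
import torch
import torch.nn as nn

# Toy setup used only to show the loop structure.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = nn.Linear(10, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()
dataloader = [(torch.randn(8, 10), torch.randn(8, 1)) for _ in range(3)]

running_loss = 0.0
for data, target in dataloader:
    optimizer.zero_grad()                 # reset gradients every iteration
    output = model(data.to(device))
    loss = loss_fn(output, target.to(device))
    loss.backward()
    optimizer.step()
    # Accumulate loss.item() (a Python float) rather than the loss tensor
    # itself; keeping the tensor would keep its computation graph, and the
    # GPU memory behind it, alive across iterations.
    running_loss += loss.item()

print(f"mean loss: {running_loss / len(dataloader):.4f}")
```

Two habits prevent most loop-related CUDA Out of Memory errors: call optimizer.zero_grad() at the top of every iteration, and never store loss or output tensors across iterations without detaching them or converting to Python numbers.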