Prompt Engineering / GenAI · ~10 mins

GPU infrastructure planning in Prompt Engineering / GenAI - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)

Complete the code to specify the number of GPUs for training.

trainer = Trainer(model=model, args=TrainingArguments(per_device_train_batch_size=16, [1]=4))
A. n_gpu
B. num_gpus
C. gpu_count
D. device_count
Common Mistakes
Using num_gpus or gpu_count which are not standard argument names.
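Worth knowing: in recent transformers releases, TrainingArguments exposes n_gpu as a derived property rather than a constructor argument, and the Trainer simply uses every GPU that CUDA can see. A minimal sketch of that behavior, with a hypothetical helper standing in for the library's device discovery:

```python
import os

# Hypothetical helper mirroring how the GPU count is usually derived:
# the Trainer uses all visible devices, so restricting
# CUDA_VISIBLE_DEVICES is the standard way to control the count.
def visible_gpu_count() -> int:
    devices = os.environ.get("CUDA_VISIBLE_DEVICES", "")
    return len([d for d in devices.split(",") if d.strip()])

os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"  # expose 4 GPUs
print(visible_gpu_count())  # → 4
```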
Task 2: Fill in the blank (medium)

Complete the code to move the model to the GPU device.

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.[1](device)
A. cuda
B. to
C. move
D. device
Common Mistakes
Choosing cuda: model.cuda() exists, but it fails when the chosen device is the CPU; .to(device) handles both cases.
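The .to(device) pattern can be sketched without a GPU by using a stand-in class; torch.nn.Module behaves the same way in that to() returns the module, so the call can be chained or reassigned. FakeModule is purely illustrative:

```python
# Stand-in for a torch.nn.Module to illustrate the .to(device) pattern.
class FakeModule:
    def __init__(self):
        self.device = "cpu"

    def to(self, device):
        self.device = device
        return self  # to() returns the module itself, as in torch

cuda_available = False  # stand-in for torch.cuda.is_available()
device = "cuda" if cuda_available else "cpu"
model = FakeModule().to(device)
print(model.device)  # → cpu
```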
Task 3: Fill in the blank (hard)

Fix the error in the code to correctly check GPU availability.

if torch.cuda.[1]() > 0:
    print('GPUs are available')
A. device_count
B. is_available
C. available
D. count
Common Mistakes
Using is_available which returns a boolean, not a count.
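The distinction can be shown with stand-ins for the two torch.cuda calls (both fake_* functions below are hypothetical stubs): is_available() answers "is there any GPU?", while device_count() answers "how many?", which is what a > 0 comparison is really about.

```python
# Stubs for torch.cuda.is_available() and torch.cuda.device_count().
def fake_device_count() -> int:
    return 2  # pretend two GPUs are visible

def fake_is_available() -> bool:
    return fake_device_count() > 0

# The quiz's check, with the count-returning call in the blank:
if fake_device_count() > 0:
    print(f"{fake_device_count()} GPU(s) are available")  # → 2 GPU(s) are available
```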
Task 4: Fill in the blank (hard)

Fill both blanks to create a dictionary mapping GPU IDs to their memory usage in MB.

gpu_memory = {i: torch.cuda.memory_allocated(i) // (1024 * 1024) for [1] in [2](torch.cuda.device_count())}
A. i
B. range
C. torch
D. device
Common Mistakes
Choosing torch or device for the iterable; range is needed to turn the device count into the sequence of GPU indices 0..n-1.
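The same comprehension runs without a GPU if the two torch.cuda calls are stubbed out (the fake_* functions below are hypothetical), which makes the range pattern visible on its own: range converts the device count into GPU indices, and each index becomes a dictionary key.

```python
# Stubs for torch.cuda.device_count() and torch.cuda.memory_allocated().
def fake_device_count() -> int:
    return 2

def fake_memory_allocated(i: int) -> int:
    return (i + 1) * 512 * 1024 * 1024  # pretend 512 MB and 1024 MB

# Map each GPU index to its allocated memory in MB.
gpu_memory = {
    i: fake_memory_allocated(i) // (1024 * 1024)
    for i in range(fake_device_count())
}
print(gpu_memory)  # → {0: 512, 1: 1024}
```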
Task 5: Fill in the blank (hard)

Fill all three blanks to set up distributed training with the correct backend and initialize the process group.

import torch.distributed as dist

dist.init_process_group(backend=[1], init_method='env://', world_size=[2], rank=[3])
A. 'nccl'
B. world_size
C. rank
D. 'gloo'
Common Mistakes
Choosing the 'gloo' backend for GPU training; 'nccl' is the recommended backend for GPU collectives and is much faster, while 'gloo' is meant for CPU-only training.
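The backend choice follows directly from device availability, so it is often computed rather than hard-coded. A minimal sketch with a hypothetical helper (the real decision would use torch.cuda.is_available() for the flag):

```python
# Hypothetical helper: PyTorch's distributed docs recommend 'nccl' for
# GPU collectives and 'gloo' for CPU-only training, so the backend can
# be picked from availability instead of being hard-coded.
def pick_backend(gpu_available: bool) -> str:
    return "nccl" if gpu_available else "gloo"

print(pick_backend(True))   # → nccl
print(pick_backend(False))  # → gloo
```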