
GPU infrastructure planning in Prompt Engineering / GenAI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · intermediate
Understanding GPU Memory Requirements

You are planning GPU resources for training a deep learning model. The model requires 12GB of GPU memory per training batch. You want to train with a batch size of 64. How much total GPU memory is needed if you want to run the training on a single GPU without memory overflow?

A. 12 GB
B. 768 GB
C. 768 MB
D. 7680 GB
💡 Hint

Multiply the memory per batch by the batch size to get total memory needed.
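The hint's arithmetic can be sketched in a few lines. Note this is a deliberate simplification: it treats memory as scaling linearly with batch size and ignores fixed costs such as model weights and optimizer state.

```python
def total_memory_gb(memory_per_unit_gb: float, batch_size: int) -> float:
    """Total GPU memory, assuming memory grows linearly with batch size
    (simplification: ignores fixed memory for weights and optimizer state)."""
    return memory_per_unit_gb * batch_size

# 12 GB per unit of batch size, batch size 64:
print(total_memory_gb(12, 64))  # 768.0
```

768 GB far exceeds any single GPU, which is why techniques like gradient accumulation or model sharding exist for workloads of this scale.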

Model Choice · intermediate
Choosing GPUs for Parallel Training

You want to speed up training by using multiple GPUs in parallel. Which GPU setup is best for minimizing communication overhead between GPUs?

A. GPUs connected over a standard Ethernet network
B. Multiple GPUs connected via PCIe on the same motherboard
C. GPUs in separate machines connected via Wi-Fi
D. GPUs connected via USB hubs
💡 Hint

Consider the speed and latency of connections between GPUs.
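A rough back-of-the-envelope comparison makes the gap concrete. The bandwidth figures below are approximate, illustrative assumptions (order-of-magnitude values for NVLink, PCIe 4.0 x16, and 10 Gb Ethernet), and the model ignores latency and protocol overhead entirely.

```python
# Idealized time to move 1 GB of gradients over different interconnects.
# Bandwidths are approximate, illustrative assumptions:
INTERCONNECT_GB_PER_S = {
    "NVLink (same board)": 600.0,
    "PCIe 4.0 x16": 32.0,
    "10 Gb Ethernet": 1.25,
}

def transfer_seconds(data_gb: float, bandwidth_gb_per_s: float) -> float:
    """Idealized transfer time; ignores latency and protocol overhead."""
    return data_gb / bandwidth_gb_per_s

for name, bw in INTERCONNECT_GB_PER_S.items():
    ms = transfer_seconds(1.0, bw) * 1000
    print(f"{name:>20}: {ms:7.1f} ms per GB")
```

Even under these idealized assumptions, on-board interconnects move gradients orders of magnitude faster than Ethernet, and that gap is paid on every synchronization step.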

Hyperparameter · advanced
Adjusting Batch Size for GPU Memory Limits

You have a GPU with 24GB memory. Your model uses 8GB per batch of size 32. You want to increase batch size but cannot exceed GPU memory. What is the maximum batch size you can use?

A. 96
B. 64
C. 72
D. 48
💡 Hint

Compute memory per sample (8 GB ÷ 32 samples), then max batch size = 24 GB ÷ memory per sample.
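The hint's two steps translate directly to code. As with the first problem, this assumes per-sample memory scales linearly and ignores fixed overheads, so real-world headroom would be smaller.

```python
def max_batch_size(gpu_memory_gb: float,
                   mem_per_batch_gb: float,
                   batch_size: int) -> int:
    """Largest batch that fits, assuming memory scales linearly per sample
    (simplification: ignores fixed memory for weights and activations overhead)."""
    mem_per_sample = mem_per_batch_gb / batch_size  # 8 / 32 = 0.25 GB
    return int(gpu_memory_gb // mem_per_sample)

print(max_batch_size(24, 8, 32))  # 96
```

0.25 GB per sample into 24 GB gives a maximum batch size of 96.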

Metrics · advanced
Evaluating GPU Utilization Metrics

You monitor GPU utilization during training and see it averages 30%. What does this indicate about your GPU usage?

A. GPU is fully utilized; training is optimal
B. GPU is overheating; reduce workload
C. GPU is underutilized; training could be faster with optimization
D. GPU memory is full; reduce batch size
💡 Hint

Consider what low GPU utilization means for training speed.
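A quick heuristic, under the simplifying assumption that low utilization means the GPU is idle waiting on something else (e.g. the data pipeline) and that compute time itself stays fixed: eliminating the stalls bounds the achievable speedup.

```python
def potential_speedup(utilization: float) -> float:
    """Upper-bound speedup if idle time were eliminated, assuming the GPU is
    idle (not compute-bound) for the remaining fraction of wall-clock time."""
    if not 0.0 < utilization <= 1.0:
        raise ValueError("utilization must be in (0, 1]")
    return 1.0 / utilization

print(f"{potential_speedup(0.30):.1f}x")  # 3.3x
```

At 30% utilization, fixing the bottleneck (often data loading or preprocessing) could make training up to roughly 3.3x faster.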

🔧 Debug · expert
Diagnosing Training Slowdown on Multi-GPU Setup

You set up training on 4 GPUs but notice training is slower than on a single GPU. Which is the most likely cause?

A. High communication overhead between GPUs causing delays
B. Each GPU has insufficient memory causing crashes
C. The model is too small to benefit from multiple GPUs
D. The GPUs are running at full utilization
💡 Hint

Think about what slows down multi-GPU training besides memory or utilization.
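The failure mode can be sketched with a toy cost model. All numbers here are hypothetical: compute splits evenly across GPUs, while per-step communication overhead grows with GPU count. When the added communication exceeds the compute saved, more GPUs means slower steps.

```python
def step_time(compute_s: float, comm_s: float, n_gpus: int) -> float:
    """Toy cost model (hypothetical): compute is divided across GPUs,
    communication overhead grows linearly with the number of extra GPUs."""
    return compute_s / n_gpus + comm_s * (n_gpus - 1)

single = step_time(1.0, 0.5, 1)  # 1.0 s per step
multi = step_time(1.0, 0.5, 4)   # 0.25 + 1.5 = 1.75 s per step
print(single, multi)  # 4 GPUs are slower when communication dominates
```

With 0.5 s of communication per extra GPU, the 4-GPU setup spends more time synchronizing gradients than it saves on compute, which matches the scenario in the question.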