Prompt Engineering / GenAI (~20 mins)

Why architecture choices affect scalability in Prompt Engineering / GenAI - Challenge Your Understanding

Challenge - 5 Problems
🎖️ Scalability Mastery
Get all challenges correct to earn this badge. Test your skills under time pressure!
🧠 Conceptual
intermediate
How does model architecture impact training speed?

Imagine you have two models: one with many layers and one with fewer layers. Which statement best explains how the number of layers affects training speed?

A. More layers usually mean slower training because there are more calculations to do.
B. More layers always make training faster because the model learns better.
C. The number of layers does not affect training speed at all.
D. Fewer layers always cause the model to crash during training.
💡 Hint

Think about how adding more steps in a recipe takes more time.
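To make the "more steps in a recipe" idea concrete, here is a minimal sketch (with hypothetical layer widths) that counts the multiply-accumulate operations in one forward pass of a stack of fully connected layers. Each extra layer adds another matrix multiplication, so the work per training step grows with depth.

```python
# Count multiply-accumulate operations (MACs) for a dense network,
# given its layer widths as [input, hidden..., output].
def forward_pass_macs(layer_sizes):
    """Each adjacent pair of widths (a, b) is one weight matrix of a*b MACs."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

shallow = forward_pass_macs([784, 128, 10])         # 2 weight matrices
deep = forward_pass_macs([784, 128, 128, 128, 10])  # 4 weight matrices

print(shallow)  # 101632
print(deep)     # 134400 — same input/output, but more work per step
```

The deeper model does more computation per step, which is why, all else equal, it trains more slowly (answer A).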

Model Choice
intermediate
Choosing architecture for large datasets

You have a very large dataset and limited computing power. Which model architecture choice helps scalability best?

A. A model that requires loading the entire dataset into memory.
B. A very deep neural network with millions of parameters.
C. A simple model with fewer layers and parameters.
D. A model that uses all available memory at once.
💡 Hint

Think about what works well when your computer has limited memory.
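One sketch of the memory-friendly alternative to option A: stream the dataset in fixed-size chunks so only one chunk is ever resident in memory. The generator below is a simplified illustration (the batch size and data are hypothetical); memory use stays constant no matter how large the dataset grows.

```python
# Stream any iterable in fixed-size batches instead of loading it all at once.
def stream_batches(records, batch_size=4):
    """Yield lists of at most batch_size records; only one batch is held in memory."""
    batch = []
    for rec in records:
        batch.append(rec)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # emit the final, possibly smaller, batch
        yield batch

# Works the same whether `records` is a small list or a lazy file reader.
print(list(stream_batches(range(10), batch_size=4)))
# [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```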

Hyperparameter
advanced
Effect of batch size on scalability

Consider training a model with different batch sizes. Which batch size choice best supports scalability on limited GPU memory?

A. Batch size equal to the entire dataset size.
B. Very small batch size that fits comfortably in GPU memory.
C. Batch size does not affect GPU memory usage.
D. Very large batch size that uses all GPU memory at once.
💡 Hint

Think about how much data you can hold in your hands at once.
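A back-of-the-envelope sketch (all numbers hypothetical) of why batch size drives GPU memory: the memory needed to hold one batch of float32 values grows linearly with the batch size, so the batch must be sized to fit the GPU, and a batch equal to the whole dataset almost never does.

```python
# Approximate memory (in MB) needed to hold one batch of float32 values.
def batch_memory_mb(batch_size, features_per_sample, bytes_per_value=4):
    """Memory scales linearly with batch size: samples * features * bytes."""
    return batch_size * features_per_sample * bytes_per_value / 1e6

print(batch_memory_mb(32, 1_000_000))         # 128.0 MB — fits a modest GPU
print(batch_memory_mb(1_000_000, 1_000_000))  # 4000000.0 MB — the whole dataset will not fit
```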

Metrics
advanced
Measuring scalability with training time

You train two models on the same dataset. Model A takes 2 hours, Model B takes 6 hours. Both have similar accuracy. What does this say about their scalability?

A. Model B is more scalable because it takes longer to train.
B. Training time does not relate to scalability.
C. Both models have the same scalability because accuracy is similar.
D. Model A is more scalable because it trains faster with similar accuracy.
💡 Hint

Think about which model can handle bigger data faster.
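The comparison in this question can be framed as throughput (samples processed per hour) rather than raw wall-clock time. A quick sketch, assuming a hypothetical dataset of one million samples: with similar accuracy, the faster model absorbs more data in the same compute budget, which is what "more scalable" means here.

```python
# Throughput: how many training samples a model processes per hour.
def samples_per_hour(n_samples, hours):
    return n_samples / hours

dataset = 1_000_000
model_a = samples_per_hour(dataset, 2)  # 500000.0 samples/hour
model_b = samples_per_hour(dataset, 6)  # ~166667 samples/hour

print(model_a / model_b)  # 3.0 — Model A has three times the throughput
```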

🔧 Debug
expert
Identifying architecture bottleneck in scalability

Given a model that slows down drastically when data size doubles, which architectural choice is most likely causing the bottleneck?

A. Using a fully connected layer with very large input size causing many computations.
B. Using convolutional layers with small filters and stride 1.
C. Using batch normalization layers after each activation.
D. Using dropout layers to reduce overfitting.
💡 Hint

Think about which layer type grows computation most with input size.
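The hint can be illustrated with a rough sketch (hypothetical sizes): a fully connected layer that maps n inputs to n outputs does n × n multiply-accumulates, so doubling the input quadruples its work, while a convolution with a fixed-size filter does k × n and merely doubles. That quadratic growth is why the fully connected layer is the likely bottleneck.

```python
# Compare how compute grows when the input size doubles.
def dense_macs(n):
    return n * n  # fully connected n -> n: every input feeds every output

def conv_macs(n, k=3):
    return k * n  # 1-D convolution: each output reads a fixed k-sized window

print(dense_macs(1000), dense_macs(2000))  # 1000000 4000000 — 4x the work
print(conv_macs(1000), conv_macs(2000))    # 3000 6000 — only 2x the work
```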