Prompt Engineering / GenAI (~20 mins)

LLM scaling laws in Prompt Engineering / GenAI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual (intermediate)
Understanding the relationship between model size and performance
Which statement best describes the general trend observed in LLM scaling laws regarding model size and performance?
A. Performance improves as a power-law function of the number of parameters in the model.
B. Performance improves logarithmically with the number of parameters in the model.
C. Performance improves linearly with the number of parameters in the model.
D. Performance remains constant regardless of the number of parameters.
💡 Hint
Think about how small increases in size can lead to significant improvements, but not in a simple linear way.
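To make the hint concrete, here is a minimal sketch of a power-law loss curve. The constants a and b are illustrative assumptions, not fitted values from any real model.

```python
# Sketch: loss under a hypothetical power law L(N) = a * N**(-b),
# with illustrative constants (not fitted to any real model).
a, b = 10.0, 1 / 3

def power_law_loss(n_params: float) -> float:
    """Loss as a power-law function of parameter count N."""
    return a * n_params ** (-b)

# With b = 1/3, an 8x increase in parameters halves the loss,
# since 8**(-1/3) = 0.5 -- steady relative gains, diminishing
# absolute gains, and clearly neither linear nor logarithmic.
for n in [1e6, 8e6, 64e6]:
    print(f"N={n:.0e}: loss={power_law_loss(n):.4f}")
```

Running this prints losses of 0.1, 0.05, and 0.025: each 8x jump in N buys a fixed halving, which is the signature of a power law.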
Metrics (intermediate)
Evaluating loss behavior with increased compute
According to LLM scaling laws, how does the training loss typically change as the amount of compute used for training increases?
A. Training loss remains unchanged regardless of compute.
B. Training loss decreases following a power-law with increased compute.
C. Training loss decreases exponentially with increased compute.
D. Training loss increases as compute increases.
💡 Hint
Consider how more compute allows better fitting but with diminishing returns.
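The diminishing-returns behavior in the hint can be sketched with a hypothetical compute power law; the constants a and alpha below are assumptions chosen only for illustration.

```python
# Sketch: training loss vs compute under a hypothetical power law
# L(C) = a * C**(-alpha); constants are illustrative, not fitted.
a, alpha = 5.0, 0.05

def compute_loss(compute_flops: float) -> float:
    """Loss as a power-law function of training compute C."""
    return a * compute_flops ** (-alpha)

# Doubling compute shrinks loss by a constant *factor* 2**(-alpha),
# not by a constant amount -- each doubling buys a smaller
# absolute improvement than the last (diminishing returns).
prev = compute_loss(1e18)
for c in [2e18, 4e18, 8e18]:
    cur = compute_loss(c)
    print(f"C={c:.0e}: loss={cur:.4f}, absolute drop={prev - cur:.4f}")
    prev = cur
```

The printed absolute drops shrink at each doubling even though the ratio between successive losses stays fixed, which is why the curve looks smooth but flattening on a linear scale.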
Model Choice (advanced)
Choosing model size for fixed compute budget
Given a fixed compute budget, which strategy aligns best with LLM scaling laws to minimize training loss?
A. Train multiple small models independently and average their outputs.
B. Train a smaller model for more training steps.
C. Train a very large model with fewer training steps.
D. Balance model size and training steps to optimize compute usage.
💡 Hint
Think about how compute is split between model size and training duration.
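A rough sketch of the balance the hint alludes to, in the spirit of the Chinchilla result (Hoffmann et al., 2022), which found that compute-optimal model size and token count both grow roughly as the square root of compute. The factor of 6 comes from the common approximation C ≈ 6·N·D FLOPs; the exact exponents and constants here are simplifying assumptions.

```python
# Rough compute-optimal split, assuming C ~= 6 * N * D FLOPs and
# that optimal N and D both scale as C**0.5 (Chinchilla-style).
def optimal_split(compute_flops: float) -> tuple[float, float]:
    n_params = (compute_flops / 6) ** 0.5   # model parameters
    n_tokens = (compute_flops / 6) ** 0.5   # training tokens
    return n_params, n_tokens

n, d = optimal_split(6e20)
print(f"params ~ {n:.2e}, tokens ~ {d:.2e}")
```

Under these assumptions, a 4x compute budget only supports a 2x larger model, with the other 2x going to more training tokens: neither extreme (very large model, few steps) nor (small model, many steps) uses the budget well.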
🔧 Debug (advanced)
Identifying incorrect interpretation of scaling laws
Which of the following interpretations of LLM scaling laws is incorrect?
A. Loss decreases smoothly as model size and compute increase.
B. Compute-efficient training requires balancing model size and data.
C. Doubling model parameters always halves the training loss.
D. Increasing dataset size improves performance up to a point.
💡 Hint
Consider if the relationship between parameters and loss is linear or not.
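A two-line check of the hint: under a power law with exponent b < 1 (b = 1/3 here is an illustrative assumption), doubling N multiplies the loss by 2^(−b), which is well above one half.

```python
# Under loss ~ N**(-b) with illustrative b = 1/3, doubling N
# multiplies the loss by 2**(-b) ~= 0.794 -- not 0.5, so
# "doubling parameters always halves the loss" cannot hold.
b = 1 / 3
ratio = 2 ** (-b)
print(f"loss(2N) / loss(N) = {ratio:.3f}")  # ~0.794, not 0.5
```

Halving the loss per doubling would require b = 1, a far steeper exponent than scaling-law fits typically report.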
Predict Output (expert)
Predicting training loss from scaling law formula
Given the scaling law formula for training loss:
loss = a * N**(-b) + c, where N is the number of parameters, a = 10, b = 1/3, and c = 0.1. What is the training loss when N = 1,000,000?
a = 10
b = 1 / 3
c = 0.1
N = 1_000_000
loss = a * N ** (-b) + c  # power-law term plus irreducible-loss floor c
print(round(loss, 4))
A. 0.2
B. 0.4
C. 0.3
D. 0.5
💡 Hint
Calculate N to the power of -b first, then multiply by a and add c.