NLP · ML · ~20 mins

GPT family overview in NLP - Practice Problems & Coding Challenges

Challenge - 5 Problems
🧠 Conceptual · Intermediate
Understanding GPT Model Sizes
Which GPT model in the family is known for having the largest number of parameters?
A. GPT-1
B. GPT-3
C. GPT-2
D. GPT-Neo
💡 Hint: Think about the model that marked a big jump in size and capability.
🧠 Conceptual · Intermediate
Training Data Differences
What is a key difference in the training data between GPT-2 and GPT-3?
A. GPT-3 was trained only on code; GPT-2 only on natural language
B. GPT-2 used only books; GPT-3 used only websites
C. GPT-3 was trained on a much larger and more diverse dataset than GPT-2
D. GPT-2 used supervised learning; GPT-3 used reinforcement learning
💡 Hint: Consider the scale and variety of the data used.
Model Choice · Advanced
Choosing GPT Model for Few-Shot Learning
Which GPT model is best suited for few-shot learning tasks without fine-tuning?
A. GPT-3
B. GPT-1
C. GPT-2
D. GPT-2 Small
💡 Hint: Look for the model known for strong few-shot capabilities.
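Background for this question: few-shot learning means packing a handful of worked examples directly into the prompt, so the model picks up the task pattern at inference time without any weight updates. A minimal sketch of assembling such a prompt in Python (the sentiment task, example texts, and labels are made up for illustration):

```python
def build_few_shot_prompt(examples, query, instruction="Classify the sentiment:"):
    """Assemble a few-shot prompt: instruction, worked examples, then the query."""
    lines = [instruction]
    for text, label in examples:
        # Each demonstration shows the input/output pattern the model should imitate.
        lines.append(f"Text: {text}\nSentiment: {label}")
    # End with the unanswered query so the model completes the label.
    lines.append(f"Text: {query}\nSentiment:")
    return "\n\n".join(lines)

demos = [("I loved this movie!", "positive"),
         ("Terrible service.", "negative")]
prompt = build_few_shot_prompt(demos, "The food was great.")
print(prompt)
```

The resulting string would be sent as-is to a language model; the demonstrations, not fine-tuning, carry the task definition.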
Metrics · Advanced
Evaluating GPT Model Performance
Which metric is commonly used to evaluate the language modeling performance of GPT models?
A. Perplexity
B. Mean Squared Error
C. F1 Score
D. Accuracy
💡 Hint: This metric measures how well a model predicts the next word.
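For context on this question: perplexity is the exponential of the average negative log-likelihood the model assigns to each token, so lower is better. A small self-contained sketch (the probabilities are made up; a real evaluation would take them from a model's next-token distribution):

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-probability per token."""
    nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(nll)

# A model that assigns probability 0.25 to each of 4 tokens is, on average,
# as uncertain as a uniform choice among 4 options:
print(perplexity([0.25, 0.25, 0.25, 0.25]))  # ≈ 4.0
```

Intuitively, a perplexity of k means the model is as uncertain as if it were choosing uniformly among k tokens at each step.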
🔧 Debug · Expert
Identifying Cause of GPT Model Output Bias
A GPT model consistently generates biased or inappropriate text. What is the most likely cause?
A. The model architecture is too small
B. The model was trained for too many epochs
C. The optimizer used was incorrect
D. The training data contains biased or unfiltered content
💡 Hint: Think about what influences the model's learned behavior most.