Prompt Engineering / GenAI · ~20 mins

Why LLMs understand and generate text in Prompt Engineering / GenAI - Challenge Your Understanding

Challenge - 5 Problems
🎖️ LLM Text Mastery
Get all challenges correct to earn this badge! Test your skills under time pressure!
🧠 Conceptual · intermediate
How do Large Language Models (LLMs) learn language patterns?

LLMs are trained on huge amounts of text data. What is the main way they learn to understand and generate text?

A) By memorizing every sentence exactly as it appears in the training data
B) By translating text into images and then back to text
C) By using fixed rules programmed by humans to generate sentences
D) By learning statistical patterns and relationships between words and phrases
💡 Hint

Think about how LLMs predict the next word based on previous words.
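The "statistical patterns" idea can be made concrete with a toy bigram model: count which words follow which, then turn the counts into probabilities. This is a sketch for intuition, not how real LLMs are implemented.

```python
from collections import Counter, defaultdict

# Toy corpus: the "training data" the model sees.
corpus = "i love to eat . i love to sleep . i love to eat pizza".split()

# Count how often each word follows each context word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# Turn counts into probabilities: P(next word | previous word).
def next_word_probs(prev):
    counts = following[prev]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("to"))  # 'eat' seen twice after 'to', 'sleep' once
```

Nothing here was hand-programmed as a rule: the probabilities fall out of the data, which is the core distinction between option D and option C.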

Predict Output · intermediate
Output of a simple token prediction example

Given a very simple model that predicts the next word based on previous words, what will be the output?

# A toy model: candidate next words with their probabilities.
context = ['I', 'love', 'to']
possible_next_words = {'eat': 0.6, 'sleep': 0.3, 'run': 0.1}
predicted_word = max(possible_next_words, key=possible_next_words.get)
print(' '.join(context + [predicted_word]))
A) I love to eat
B) I love to run
C) I love to sleep
D) I love to
💡 Hint

Look for the word with the highest probability in possible_next_words.
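The snippet above always takes the single most probable word (greedy decoding). Real text generators often sample instead, so lower-probability words can also appear. A minimal sketch contrasting the two, reusing the question's toy probabilities:

```python
import random

possible_next_words = {'eat': 0.6, 'sleep': 0.3, 'run': 0.1}

# Greedy decoding: always pick the highest-probability word
# (this is what the quiz snippet does).
greedy = max(possible_next_words, key=possible_next_words.get)
print(greedy)  # eat

# Sampling: draw a word in proportion to its probability, so 'sleep'
# or 'run' can also show up, which makes output less repetitive.
words = list(possible_next_words)
weights = list(possible_next_words.values())
sampled = random.choices(words, weights=weights, k=1)[0]
print(sampled)
```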

Model Choice · advanced
Choosing the right model architecture for text generation

You want to build a model that can generate human-like text by predicting the next word in a sentence. Which model architecture is best suited for this task?

A) K-Nearest Neighbors (KNN) classifier
B) Support Vector Machine (SVM) for binary classification
C) Recurrent Neural Network (RNN) or Transformer designed for sequential data
D) Convolutional Neural Network (CNN) designed for image recognition
💡 Hint

Think about models that handle sequences and context well.
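What "handles sequences and context" means can be sketched with a single toy recurrent update. The weights below are hypothetical fixed values chosen for illustration; a real RNN or Transformer learns its parameters. The point is that a hidden state carries information from earlier tokens forward, so early words influence later predictions.

```python
import math

def rnn_step(hidden, x, w_h=0.5, w_x=0.8):
    # New hidden state mixes the previous state with the current input.
    return math.tanh(w_h * hidden + w_x * x)

# Encode a "sentence" as a sequence of toy numeric inputs.
hidden = 0.0
for x in [1.0, 0.0, 0.5]:
    hidden = rnn_step(hidden, x)

# Same sequence except for the FIRST input: the final state differs,
# showing that earlier context reaches later steps.
hidden_alt = 0.0
for x in [0.0, 0.0, 0.5]:
    hidden_alt = rnn_step(hidden_alt, x)

print(hidden != hidden_alt)  # True
```

A KNN, SVM, or CNN as listed in the other options has no such mechanism for carrying variable-length sequential context.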

Metrics · advanced
Evaluating text generation quality

Which metric is commonly used to measure how well a language model predicts the next word in a sequence?

A) Perplexity, which measures how surprised the model is by the text
B) Mean Squared Error (MSE), used for regression tasks
C) Accuracy for classifying images
D) F1 score, used for balanced classification
💡 Hint

This metric is lower when the model predicts text better.
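A minimal sketch of how perplexity is computed, using made-up per-token probabilities (the values are illustrative, not from any real model):

```python
import math

# Probabilities the model assigned to each actual next word
# in a held-out sentence (hypothetical values).
token_probs = [0.5, 0.25, 0.1, 0.4]

# Perplexity = exp of the average negative log-probability.
# Higher probabilities on the true words -> lower perplexity.
avg_neg_log_prob = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(avg_neg_log_prob)
print(perplexity)
```

If the model assigned probability 1.0 to every true next word, perplexity would hit its minimum of 1; the more "surprised" it is, the higher the number.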

🔧 Debug · expert
Identifying the cause of poor text generation

A language model generates repetitive and nonsensical text after training. What is the most likely cause?

A) The model was trained on images instead of text
B) The training data was too small or not diverse enough
C) The optimizer was set to a very high learning rate, causing perfect convergence
D) The model used too many layers and overfitted perfectly
💡 Hint

Think about what happens if the model sees only limited examples.
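A toy illustration of the limited-examples failure mode: a one-word-context model trained on a tiny corpus has only one continuation per word, so greedy generation immediately falls into a repetitive loop.

```python
from collections import defaultdict

# Deliberately tiny, non-diverse "training data".
corpus = "the cat sat on the mat".split()

# Learn which words follow each word (a 1-word-context model).
next_word = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word[prev].append(nxt)

# Generate greedily: always take the first learned continuation.
word, output = "the", ["the"]
for _ in range(10):
    if not next_word[word]:
        break
    word = next_word[word][0]
    output.append(word)
print(" ".join(output))  # the cat sat on the cat sat on the cat sat
```

With more and more varied data, each context would have many plausible continuations, and the generated text would stop cycling through the same few phrases.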