Prompt Engineering / GenAI · ~20 mins

Context window and token limits in Prompt Engineering / GenAI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️
Context Window Master
Get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 Conceptual
intermediate
2:00 remaining
Understanding context window size
If a language model has a context window of 2048 tokens, what happens when you input a text longer than 2048 tokens?
A. The model processes only the last 2048 tokens, ignoring the earlier ones.
B. The model processes all tokens by splitting them into multiple windows automatically.
C. The model raises an error and refuses to process the input.
D. The model processes only the first 2048 tokens and ignores the rest.
💡 Hint
Think about how models handle inputs that exceed their maximum token capacity.
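Before sending a prompt, you can check its token count against the window yourself. A minimal sketch using a toy whitespace tokenizer (an illustrative assumption; real models use subword tokenizers such as BPE, so actual counts will differ):

```python
# Toy illustration: check whether an input fits a model's context window.
# Whitespace splitting is a stand-in; a real pipeline would use the
# model's own tokenizer.
CONTEXT_WINDOW = 2048

def toy_tokenize(text):
    return text.split()

def fits_in_window(text, limit=CONTEXT_WINDOW):
    return len(toy_tokenize(text)) <= limit

print(fits_in_window("a short prompt"))   # True
print(fits_in_window("word " * 3000))     # False: 3000 tokens > 2048
```

What happens to an over-long input (truncation from the front, from the back, or an error) depends on the specific model and library, which is exactly what this question probes.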
Predict Output
intermediate
2:00 remaining
Token count calculation
Given the following Python code using the Hugging Face tokenizer, what is the output of the print statement?
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
text = 'Hello world! This is a test.'
tokens = tokenizer.encode(text)
print(len(tokens))
A. 5
B. 8
C. 6
D. 7
💡 Hint
Count how the tokenizer splits the sentence into tokens.
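A useful intuition: token count is usually not word count, because subword tokenizers tend to split punctuation into separate tokens. A toy sketch with a regex-based tokenizer (an illustrative assumption, not GPT-2's actual BPE; run the snippet in the question to see the real count):

```python
import re

# Toy tokenizer: words and punctuation marks become separate tokens,
# roughly mimicking how subword tokenizers handle punctuation.
def toy_tokenize(text):
    return re.findall(r"\w+|[^\w\s]", text)

tokens = toy_tokenize("Tokenizers split text, right?")
print(tokens)       # ['Tokenizers', 'split', 'text', ',', 'right', '?']
print(len(tokens))  # 6 tokens from a 4-word sentence
```

Real BPE tokenizers can also merge or split differently depending on spacing and casing, so the only reliable way to count is to run the tokenizer itself.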
Hyperparameter
advanced
2:00 remaining
Choosing context window size for training
When training a transformer model, increasing the context window size from 512 to 2048 tokens will most likely:
A. Increase memory usage and training time significantly.
B. Decrease the model's ability to understand long texts.
C. Reduce the number of model parameters.
D. Make the model train faster due to fewer batches.
💡 Hint
Think about how longer sequences affect computation in transformers.
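The key fact behind this question: standard self-attention computes a score between every pair of positions, so its cost grows quadratically with sequence length. A quick back-of-the-envelope sketch (entry counts only; constants and other per-layer costs are ignored):

```python
# Self-attention builds an n x n score matrix per head, so the number
# of score entries grows quadratically with sequence length n.
def attention_score_entries(seq_len):
    return seq_len * seq_len

for n in (512, 2048):
    print(n, attention_score_entries(n))

# Going from 512 to 2048 is 4x the length but 16x the attention entries.
print(attention_score_entries(2048) // attention_score_entries(512))  # 16
```

This is why longer context windows noticeably increase memory usage and training time, and why efficient-attention variants exist.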
Metrics
advanced
2:00 remaining
Effect of token limits on model evaluation
If a model's context window is 1024 tokens but the evaluation dataset contains samples of 1500 tokens, what is the likely effect on the evaluation metrics?
A. Metrics improve because longer inputs provide more information.
B. Metrics may be worse because the model cannot see the full input context.
C. Metrics remain unchanged as the model truncates inputs automatically without impact.
D. Metrics become invalid because the model crashes on long inputs.
💡 Hint
Consider how truncating input affects model understanding.
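A sketch of what truncation does to an over-long evaluation sample (token IDs are simulated integers here, an illustrative assumption; a real pipeline would tokenize first):

```python
# Sketch: a 1500-token sample evaluated with a 1024-token context window.
CONTEXT_WINDOW = 1024

def truncate(token_ids, limit=CONTEXT_WINDOW):
    # Keep only the first `limit` tokens; everything after is invisible
    # to the model, so any answer-relevant content there is lost.
    return token_ids[:limit]

sample = list(range(1500))       # stand-in for 1500 token IDs
seen = truncate(sample)
print(len(seen))                 # 1024
print(1500 - len(seen))          # 476 tokens the model never sees
```

Whether metrics degrade depends on where the relevant information sits in each sample, which is why the effect is "may be worse" rather than guaranteed.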
🔧 Debug
expert
3:00 remaining
Diagnosing token limit errors in generation
You use a language model with a 2048 token limit. Your code generates text by appending new tokens to the input prompt repeatedly. After some iterations, generation fails with a token limit error. What is the best way to fix this?
A. Increase the model's token limit by changing a parameter in the code.
B. Restart generation from scratch every time to avoid token buildup.
C. Truncate the oldest tokens from the prompt to keep total tokens under 2048.
D. Ignore the error and continue generating tokens beyond the limit.
💡 Hint
Think about how to keep the input size manageable during generation.
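One common pattern for keeping iterative generation under a fixed limit is a sliding window over the token history. A minimal sketch (token IDs are simulated integers and all names here are illustrative, not a specific library's API):

```python
# Sliding-window sketch: before each generation step, drop the oldest
# tokens so the running history stays within the model's token limit.
TOKEN_LIMIT = 2048

def clamp_to_limit(token_ids, limit=TOKEN_LIMIT):
    if len(token_ids) > limit:
        return token_ids[-limit:]  # keep only the most recent tokens
    return token_ids

history = list(range(2100))        # stand-in for prompt + generated tokens
history = clamp_to_limit(history)
print(len(history))                # 2048
print(history[0])                  # 52: the oldest 52 tokens were dropped
```

The trade-off is that dropped tokens are forgotten, so real systems often pin a system prompt or a summary at the front and slide only over the rest.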