
Difficulty: Medium · Analysis · Question 6 of 15
AI for Everyone - How AI Models Actually Work
A user inputs 120 tokens to a model with a 100-token context window, but the model only processes 80 tokens. What could explain this?
A. The model ignores tokens beyond 80 due to a bug
B. The model always processes fewer tokens than the context window
C. The user input was automatically shortened to 80 tokens by the model
D. The model's effective context window is smaller due to internal tokenization or system limits
Step-by-Step Solution
  1. Recognize context window limits: the nominal context window is 100 tokens, but the effective limit can be smaller.
  2. Consider tokenization and system constraints: hidden system prompts, special tokens, and space reserved for the model's output all consume part of the window, so the usable input budget can drop below 100 tokens (here, to 80).
  3. Final Answer: The model's effective context window is smaller due to internal tokenization or system limits -> Option D
  4. Quick Check: The effective context window can be less than the nominal one. [OK]
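The arithmetic in the steps above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical numbers: the function names and the overhead values (a 12-token system prompt and 8 tokens reserved for the reply) are assumptions chosen so the effective window works out to 80, matching the question.

```python
# Hypothetical token-budget arithmetic showing how an "effective" context
# window can be smaller than the nominal (advertised) one.

NOMINAL_WINDOW = 100   # advertised context window, in tokens
SYSTEM_PROMPT = 12     # tokens consumed by a hidden system prompt (assumed value)
RESERVED_OUTPUT = 8    # tokens reserved for the model's reply (assumed value)

def effective_window(nominal: int, system_tokens: int, reserved_output: int) -> int:
    """Usable input tokens after internal overhead is subtracted."""
    return nominal - system_tokens - reserved_output

def tokens_processed(user_tokens: int, nominal: int,
                     system_tokens: int, reserved_output: int) -> int:
    """User tokens actually processed: input beyond the effective window is dropped."""
    return min(user_tokens, effective_window(nominal, system_tokens, reserved_output))

print(effective_window(NOMINAL_WINDOW, SYSTEM_PROMPT, RESERVED_OUTPUT))        # 80
print(tokens_processed(120, NOMINAL_WINDOW, SYSTEM_PROMPT, RESERVED_OUTPUT))   # 80
```

Under these assumed overheads, a 120-token input is truncated to the 80-token effective window, which is exactly the scenario the question describes.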
Common Mistakes:
  • Assuming the model always processes its full nominal context window
  • Believing the input is shortened automatically without any user action
