Prompt Engineering / GenAI · ~20 mins

Text chunking strategies in Prompt Engineering / GenAI - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️ Badge: Text Chunking Master (answer all five challenges correctly to earn it)
🧠 Conceptual (intermediate)
Why use text chunking in language models?
Which of the following best explains why text chunking is important when processing long documents with language models?
A. It helps break long texts into smaller parts so the model can process them without losing content due to input length limits.
B. It increases the total number of words in the text to improve model accuracy.
C. It removes all punctuation to simplify the text for the model.
D. It translates the text into multiple languages before processing.
💡 Hint
Think about model input size limits and how chunking helps manage them.
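A minimal sketch of the idea behind this problem (an editor's illustration, not part of the graded question): word-based chunking keeps every piece under a chosen limit while preserving all of the original text. The 6-word limit here is an arbitrary stand-in for a model's real token limit.

```python
def chunk_words(text, max_words):
    """Split text into chunks of at most max_words words each."""
    words = text.split()
    return [' '.join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

doc = "Long documents must be split so each piece fits the model's input limit"
chunks = chunk_words(doc, 6)
# Every chunk fits the (hypothetical) 6-word limit, and joining the
# chunks back together recovers the original text in order.
```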
Predict Output (intermediate)
Output of text chunking code
What is the output of this Python code that chunks text into parts of 5 words each?
text = 'Machine learning helps computers learn from data and improve over time'
words = text.split()
chunks = [' '.join(words[i:i+5]) for i in range(0, len(words), 5)]
print(chunks)
A. ['Machine learning helps computers', 'learn from data and improve', 'over time']
B. ['Machine learning helps computers learn', 'from data and improve over', 'time']
C. ['Machine learning helps computers learn from', 'data and improve over time']
D. ['Machine learning helps', 'computers learn from data', 'and improve over time']
💡 Hint
Look at how the range and slicing work with step 5.
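If the step logic is hard to trace, a smaller toy example (deliberately different from the problem, so it does not give the answer away) shows how range with a step pairs up with slicing:

```python
items = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
# range(0, 7, 3) yields 0, 3, 6: each slice starts exactly where
# the previous one ended, and the last slice may be shorter.
groups = [items[i:i + 3] for i in range(0, len(items), 3)]
print(groups)  # [['a', 'b', 'c'], ['d', 'e', 'f'], ['g']]
```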
Model Choice (advanced)
Choosing chunk size for a transformer model
You want to chunk a large document for a transformer model with a maximum input length of 512 tokens. Which chunk size is best to avoid losing context and stay within limits?
A. Chunk size of 50 tokens to keep chunks very small.
B. Chunk size of 512 tokens to exactly match the model's max input length.
C. Chunk size of 1000 tokens to reduce the number of chunks.
D. Chunk size of 256 tokens to allow some overlap and context between chunks.
💡 Hint
Think about balancing chunk size and context overlap.
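A brief sketch of overlapping chunks (an editor's illustration of the trade-off, with integers standing in for tokens): when consecutive chunks share some tokens, context at each boundary appears in both chunks instead of being cut off.

```python
def chunk_with_overlap(tokens, chunk_size, overlap):
    """Chunk a token list so consecutive chunks share `overlap` tokens."""
    step = chunk_size - overlap  # advance by less than a full chunk
    return [tokens[i:i + chunk_size] for i in range(0, len(tokens), step)]

tokens = list(range(10))  # stand-ins for real token IDs
chunks = chunk_with_overlap(tokens, chunk_size=4, overlap=2)
# The last 2 tokens of each chunk reappear as the first 2 of the next,
# so no boundary context is lost; the final chunk may be shorter.
```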
Metrics (advanced)
Evaluating chunking impact on model accuracy
After chunking text differently, you get these model accuracies on a classification task:
- Chunk size 128 tokens: 85%
- Chunk size 256 tokens: 88%
- Chunk size 512 tokens: 86%
Which chunk size likely balances context and input size best?
A. 128 tokens, because smaller chunks always improve accuracy.
B. 512 tokens, because larger chunks contain more information.
C. 256 tokens, because it gives the highest accuracy by balancing chunk size and context.
D. All chunk sizes perform the same, so chunking does not matter.
💡 Hint
Look at which chunk size yields the best accuracy.
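Selecting the best configuration from results like these can be done programmatically (a small sketch using the accuracies stated in the problem):

```python
# Accuracies from the problem above, keyed by chunk size in tokens.
accuracies = {128: 0.85, 256: 0.88, 512: 0.86}

# max over the keys, ranked by their accuracy values.
best = max(accuracies, key=accuracies.get)
print(best)  # 256
```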
🔧 Debug (expert)
Debugging chunk overlap code
What error or output does this code produce?

text = 'AI models need context to understand text better'
words = text.split()
overlap = 2
chunk_size = 5
chunks = []
for i in range(0, len(words), chunk_size - overlap):
    chunk = words[i:i+chunk_size]
    chunks.append(' '.join(chunk))
print(chunks)
A. ['AI models need context to', 'context to understand text better', 'text better']
B. ['AI models need context to', 'need context to understand', 'to understand text better']
C. ['AI models need context to', 'models need context to understand', 'context to understand text better']
D. IndexError because the loop steps cause out-of-range slicing.
💡 Hint
Check how the loop increments and slicing work with overlap.
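A toy trace with different numbers than the problem (so it does not reveal the answer) may help with both the step arithmetic and the IndexError option:

```python
letters = ['a', 'b', 'c', 'd', 'e', 'f']
chunk_size, overlap = 3, 1
step = chunk_size - overlap  # 2: each chunk starts 2 items after the last
# Note: slicing past the end of a list does NOT raise IndexError in
# Python; it simply returns a shorter (or empty) list.
chunks = [letters[i:i + chunk_size] for i in range(0, len(letters), step)]
print(chunks)  # [['a', 'b', 'c'], ['c', 'd', 'e'], ['e', 'f']]
```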