NLP / ML · ~20 mins

Beam search decoding in NLP - Practice Problems & Coding Challenges

Challenge - 5 Problems
🎖️
Beam Search Mastery
Get all challenges correct to earn this badge!
Test your skills under time pressure!
🧠 Conceptual
intermediate
What is the main purpose of beam search decoding in NLP?

Beam search is often used in sequence generation tasks like translation or text generation. What does beam search primarily help with?

A. It randomly selects sequences to generate diverse outputs without considering probabilities.
B. It finds the single most probable output sequence by exploring all possible sequences exhaustively.
C. It balances between exploring multiple candidate sequences and focusing on the most promising ones to find likely outputs efficiently.
D. It guarantees finding the globally optimal sequence by checking every possible output.
💡 Hint

Think about how beam search keeps track of multiple candidates but limits the number to avoid huge computation.
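The hint above can be made concrete with a minimal sketch of one beam search step. The token table and probabilities below are purely illustrative (hypothetical data), but the expand-then-prune loop is the core mechanic:

```python
import math
from heapq import nlargest

# Toy table of next-token log-probabilities for each prefix.
# (Hypothetical data, just to illustrate the mechanics.)
NEXT = {
    (): [("I", math.log(0.6)), ("You", math.log(0.4))],
    ("I",): [("am", math.log(0.9)), ("was", math.log(0.1))],
    ("You",): [("are", math.log(0.8)), ("were", math.log(0.2))],
}

def beam_search_step(beams, beam_width):
    """Expand every beam with its candidate tokens, keep the top-k by score."""
    expanded = []
    for seq, score in beams:
        for token, logp in NEXT.get(seq, []):
            # Scores are summed log-probabilities of the whole sequence.
            expanded.append((seq + (token,), score + logp))
    # Prune: keep only the beam_width highest-scoring hypotheses.
    return nlargest(beam_width, expanded, key=lambda item: item[1])

beams = [((), 0.0)]          # start with a single empty hypothesis
for _ in range(2):
    beams = beam_search_step(beams, beam_width=2)
print(beams)
```

Note that the beam never exceeds `beam_width` hypotheses, which is exactly how beam search caps computation while still exploring more than one candidate.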

Predict Output
intermediate
Output of beam search step with beam width 2

Given the following partial sequences and their log probabilities, what are the top 2 sequences after expanding one step?

partial_sequences = [("I am", -1.0), ("You are", -1.2)]
candidates = {
  "I am": [("happy", -0.5), ("sad", -1.5)],
  "You are": [("kind", -0.3), ("mean", -2.0)]
}
beam_width = 2

Sum the log probabilities of each partial sequence and its expansion, then pick the top 2.

A. [('I am happy', -1.5), ('You are kind', -1.5)]
B. [('I am happy', -1.5), ('You are mean', -3.2)]
C. [('You are kind', -1.5), ('I am sad', -2.5)]
D. [('I am sad', -2.5), ('You are mean', -3.2)]
💡 Hint

Add the log probabilities of partial sequences and their expansions, then pick the top 2 with highest (least negative) sums.
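You can check your answer mechanically. This snippet applies the hint directly to the data from the problem:

```python
partial_sequences = [("I am", -1.0), ("You are", -1.2)]
candidates = {
    "I am": [("happy", -0.5), ("sad", -1.5)],
    "You are": [("kind", -0.3), ("mean", -2.0)],
}
beam_width = 2

# Expand every partial sequence and sum the log probabilities.
expanded = [
    (f"{prefix} {token}", score + logp)
    for prefix, score in partial_sequences
    for token, logp in candidates[prefix]
]
# Keep the beam_width highest-scoring (least negative) sequences.
top = sorted(expanded, key=lambda item: item[1], reverse=True)[:beam_width]
print(top)  # top 2: 'I am happy' and 'You are kind', both scoring -1.5
```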

Model Choice
advanced
Choosing beam width for a translation model

You have a neural machine translation model. You want to balance translation quality and decoding speed. Which beam width is most suitable?

A. Beam width = 5 as a compromise between quality and speed.
B. Beam width = 1 (greedy search) for fastest decoding but lower quality.
C. Beam width = 100 for best quality but very slow decoding.
D. Beam width = 0 to disable beam search and use random sampling.
💡 Hint

Think about typical beam widths used in practice for good quality without too much slowdown.

Metrics
advanced
Effect of beam width on BLEU score and decoding time

In an experiment, increasing beam width from 1 to 10 affects BLEU score and decoding time. Which statement is true?

A. BLEU score and decoding time are unaffected by beam width.
B. BLEU score improves initially but plateaus or may degrade; decoding time increases roughly linearly with beam width.
C. BLEU score decreases as beam width increases; decoding time decreases.
D. BLEU score always increases linearly with beam width, decoding time stays constant.
💡 Hint

Consider how beam search explores more sequences with larger beam widths and the tradeoff involved.
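A back-of-envelope count shows why decoding time scales roughly linearly with beam width: at each step, beam search must score one candidate per vocabulary entry for every live beam. The vocabulary size and sequence length below are hypothetical placeholders:

```python
vocab_size = 32_000   # hypothetical vocabulary size
seq_len = 20          # hypothetical output length

counts = {}
for beam_width in (1, 5, 10):
    # Per step: beam_width beams, each expanded over the full vocabulary.
    counts[beam_width] = beam_width * vocab_size * seq_len
    print(f"beam={beam_width:2d}: {counts[beam_width]:,} candidate scores")
```

Doubling the beam width doubles the candidates scored per step, which is the linear cost the correct answer refers to; the quality gain, by contrast, flattens out.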

🔧 Debug
expert
Why does beam search sometimes produce repetitive outputs?

A sequence generation model using beam search often outputs repetitive phrases like 'the the the'. What is the most likely cause?

A. Beam search always prevents repetition, so this must be a bug in the code.
B. Beam search is not exploring enough sequences due to too small beam width.
C. The input data is corrupted, causing the model to repeat tokens.
D. The model's probability distribution is biased towards repeating tokens, and beam search amplifies this by focusing on high-probability sequences.
💡 Hint

Think about how beam search picks sequences with highest probabilities and how model biases affect output.
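One common mitigation for this failure mode (a decoding-time add-on, not part of vanilla beam search) is n-gram blocking: reject any candidate token that would recreate an n-gram already present in the hypothesis. A minimal sketch, with a hypothetical helper name:

```python
def blocks_repeat(seq, token, no_repeat_ngram_size=2):
    """Return True if appending `token` to `seq` (a list of tokens)
    would recreate an n-gram that already occurs in `seq`."""
    n = no_repeat_ngram_size
    if len(seq) < n - 1:
        return False  # sequence too short to form the new n-gram
    # The n-gram that appending `token` would create.
    new_ngram = tuple(seq[-(n - 1):] + [token]) if n > 1 else (token,)
    # All n-grams already present in the sequence.
    existing = {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}
    return new_ngram in existing

seq = ["the", "cat", "sat", "on", "the"]
print(blocks_repeat(seq, "cat"))  # "the cat" already occurred -> True
print(blocks_repeat(seq, "mat"))  # "the mat" is new -> False
```

During beam expansion, candidates for which this check returns True are simply skipped, which prevents the high-probability repeated phrases that beam search would otherwise amplify.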