NLP - Text Generation

Why does beam search decoding not guarantee finding the globally optimal sequence?

A. Because it always selects the longest sequence
B. Because it uses random sampling instead of scoring
C. Because it only keeps a fixed number of candidates, potentially discarding the best path early
D. Because it ignores the model's probability scores
Step-by-Step Solution

Step 1: Understand the beam search limitation. Beam search keeps only a fixed number of top-scoring candidates (the beam width) at each decoding step, so the prefix of the globally best sequence can be pruned early if it scores poorly at first.

Step 2: Contrast with exhaustive search. Unlike exhaustive search, which scores every possible sequence, beam search trades completeness for efficiency, so the global optimum is not guaranteed.

Final Answer: Because it only keeps a fixed number of candidates, potentially discarding the best path early -> Option C

Quick Check: Beam search limits the candidates it tracks, so there is no global guarantee.

Quick Trick: Beam search prunes candidates, so the best path may be lost.

Common Mistakes:
- Thinking beam search uses random sampling
- Assuming beam search always picks the longest sequence
- Believing beam search ignores the model's probability scores
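The pruning behavior described above can be demonstrated with a minimal sketch. The toy log-probability table below is an illustrative assumption (not from any real model), deliberately constructed so that the locally best first token ("a", p=0.6) leads to a worse complete sequence than the locally second-best token ("b", p=0.4): with beam width 1 the search commits to "a" and the true optimum through "b" is discarded, while a wider beam or exhaustive search recovers it.

```python
import math

# Toy next-token log-probabilities, keyed by the previous token.
# Hypothetical values chosen so greedy decoding misses the optimum:
# best full path is <s> -> b -> x (0.4 * 0.9 = 0.36), but the first
# step through "a" (0.6) looks better locally, ending at 0.6 * 0.3 = 0.18.
LOG_PROBS = {
    "<s>": {"a": math.log(0.6), "b": math.log(0.4)},
    "a":   {"x": math.log(0.3), "y": math.log(0.3)},
    "b":   {"x": math.log(0.9), "y": math.log(0.1)},
}

def beam_search(beam_width, length=2):
    """Keep only `beam_width` best partial sequences at each step."""
    beams = [(["<s>"], 0.0)]  # (sequence, cumulative log-prob)
    for _ in range(length):
        candidates = []
        for seq, score in beams:
            for tok, lp in LOG_PROBS[seq[-1]].items():
                candidates.append((seq + [tok], score + lp))
        # Pruning step: this is exactly why the global optimum can be lost.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]
    return beams[0]

def exhaustive(length=2):
    """Score every possible sequence (exponential cost, but exact)."""
    best = None
    def expand(seq, score, depth):
        nonlocal best
        if depth == length:
            if best is None or score > best[1]:
                best = (seq, score)
            return
        for tok, lp in LOG_PROBS[seq[-1]].items():
            expand(seq + [tok], score + lp, depth + 1)
    expand(["<s>"], 0.0, 0)
    return best

greedy_seq, greedy_score = beam_search(beam_width=1)  # ends up through "a"
exact_seq, exact_score = exhaustive()                 # <s> -> b -> x
```

Here beam width 1 reduces to greedy decoding; widening the beam to 2 happens to recover the optimum in this tiny example, but for any fixed width a larger search space can always hide the best path outside the beam.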