Experiment - Beam search decoding
Problem: You have a sequence-to-sequence model for text generation. It currently uses greedy decoding, which picks the single most likely next word at each step, so it can commit to locally optimal choices and miss higher-probability sequences overall.
Current Metrics: Average BLEU score on the validation set: 25.3%
Issue: Greedy decoding often produces less diverse, lower-quality sentences because it considers only the best next word at each step; a word that looks best locally can lead into a low-probability continuation, so the best overall sequence is missed.
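Beam search mitigates this by keeping the top-k partial hypotheses at each step instead of committing to a single one. A minimal sketch of the idea, assuming a hypothetical `log_prob_fn(seq)` interface that returns `(token, log_prob)` candidates for the next step (the toy distribution below is illustrative, not the actual model):

```python
import math
from typing import Callable, List, Tuple

def beam_search(log_prob_fn: Callable[[List[int]], List[Tuple[int, float]]],
                start_token: int, eos_token: int,
                beam_width: int = 3, max_len: int = 10) -> List[int]:
    """Return the highest-scoring sequence found with beam search."""
    # Each beam entry: (cumulative log-probability, token sequence)
    beams: List[Tuple[float, List[int]]] = [(0.0, [start_token])]
    completed: List[Tuple[float, List[int]]] = []

    for _ in range(max_len):
        candidates: List[Tuple[float, List[int]]] = []
        for score, seq in beams:
            for token, lp in log_prob_fn(seq):
                new_seq = seq + [token]
                if token == eos_token:
                    completed.append((score + lp, new_seq))
                else:
                    candidates.append((score + lp, new_seq))
        if not candidates:
            break
        # Keep only the top-k partial hypotheses (greedy is beam_width=1)
        beams = sorted(candidates, key=lambda x: x[0], reverse=True)[:beam_width]

    if not completed:  # no hypothesis reached EOS within max_len
        completed = beams
    return max(completed, key=lambda x: x[0])[1]

# Toy distribution: greedy takes token 1 first (p=0.6), but the best
# full sequence goes through token 2 (0.4 * 0.9 > 0.6 * 0.3).
def toy_log_probs(seq: List[int]) -> List[Tuple[int, float]]:
    if seq == [0]:
        return [(1, math.log(0.6)), (2, math.log(0.4))]
    if seq[-1] == 1:
        return [(3, math.log(0.3))]  # EOS; total p = 0.18
    if seq[-1] == 2:
        return [(3, math.log(0.9))]  # EOS; total p = 0.36
    return []

print(beam_search(toy_log_probs, start_token=0, eos_token=3, beam_width=2))
# → [0, 2, 3]  (beam_width=1 reproduces greedy and returns [0, 1, 3])
```

With `beam_width=1` the routine degenerates to greedy decoding, which makes it easy to A/B the two strategies on the same model and compare BLEU.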