Complete the code to set a maximum length for the generated text.
output = model.generate(input_text, max_length=[1])
The max_length parameter limits how many tokens the model generates, controlling output length.
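The model.generate call above is the exercise's hypothetical API. As a rough sketch of what a length cap does internally, a generation loop simply stops appending tokens once the sequence reaches the limit (the helper name and toy "model" below are illustrative, not part of any real library):

```python
def generate_with_max_length(prompt_tokens, next_token_fn, max_length):
    """Toy generation loop: stop once the sequence reaches max_length tokens."""
    tokens = list(prompt_tokens)
    while len(tokens) < max_length:
        tokens.append(next_token_fn(tokens))
    return tokens

# Toy "model" that always emits token 0.
result = generate_with_max_length([1, 2, 3], lambda toks: 0, max_length=6)
# result is exactly 6 tokens long: [1, 2, 3, 0, 0, 0]
```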
Complete the code to stop generation when a specific word appears.
output = model.generate(input_text, stop=[1])
The stop parameter tells the model to stop generating when it sees the given string.
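Conceptually, a stop sequence truncates the output at the first occurrence of any of the given strings. A minimal sketch of that behavior (the helper name is made up for this exercise):

```python
def apply_stop(text, stop_sequences):
    """Truncate generated text at the earliest occurrence of any stop string."""
    cut = len(text)
    for s in stop_sequences:
        idx = text.find(s)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

apply_stop("Answer: 42\nNext question:", ["\n"])  # returns "Answer: 42"
```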
Complete the code to activate a safety filter that prevents the model from generating harmful content.
output = model.generate(input_text, [1]=True)
Setting enable_safety_filter=True activates content filters to avoid harmful outputs.
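Real safety filters are typically trained classifiers run by the provider, so enable_safety_filter is best read as an opaque switch in this exercise's hypothetical API. Purely as an illustration of the idea of post-generation screening, here is a trivial blocklist check (everything named here is made up for the sketch):

```python
BLOCKLIST = {"badword"}  # hypothetical; real filters use trained classifiers

def passes_safety_filter(text):
    """Toy post-generation check: reject text containing blocklisted words."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return BLOCKLIST.isdisjoint(words)

passes_safety_filter("A perfectly fine sentence.")  # returns True
```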
Fill both blanks to set temperature and top_p for safer, more focused output.
output = model.generate(input_text, temperature=[1], top_p=[2])
Lower temperature (e.g., 0.2) reduces randomness by sharpening the probability distribution. Setting top_p to 0.8 restricts sampling to the smallest set of tokens whose cumulative probability reaches 80% (nucleus sampling), making output safer and more focused.
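The two knobs can be sketched directly: temperature divides the logits before the softmax, and top_p keeps only the most probable tokens until their cumulative mass reaches the threshold. This standalone toy sampler shows the mechanics (the function name and logits-as-a-dict representation are assumptions for the sketch):

```python
import math
import random

def sample_token(logits, temperature=0.2, top_p=0.8, rng=random):
    """Temperature-scaled nucleus (top-p) sampling over a {token: logit} dict."""
    # Temperature: divide logits before softmax; lower => sharper distribution.
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())
    exp = {t: math.exp(l - m) for t, l in scaled.items()}
    z = sum(exp.values())
    probs = {t: e / z for t, e in exp.items()}
    # top_p: keep the most probable tokens until cumulative mass >= top_p.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, total = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        total += p
        if total >= top_p:
            break
    # Renormalize over the kept set only and sample from it.
    tokens, weights = zip(*kept)
    return rng.choices(tokens, weights=weights, k=1)[0]
```

With a low temperature and one clearly dominant logit, the nucleus collapses to a single token, so sampling becomes effectively deterministic.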
Fill all three blanks to add output guardrails: max tokens, stop sequence, and safety filter.
output = model.generate(input_text, max_tokens=[1], stop=[2], [3]=True)
Set max_tokens to 100 to limit output length, stop to '\n' to end generation at a newline, and enable_safety_filter=True to prevent harmful content.
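The three guardrails above can be combined in one post-processing pass. This sketch uses whitespace splitting as a crude stand-in for real tokenization, and the function name and suppress-on-match policy are assumptions for illustration:

```python
def apply_guardrails(text, max_tokens=100, stop="\n", blocklist=frozenset()):
    """Toy post-processing combining the three guardrails from the exercise."""
    idx = text.find(stop)                # stop sequence: cut at first occurrence
    if idx != -1:
        text = text[:idx]
    tokens = text.split()[:max_tokens]   # crude whitespace "token" cap
    if any(t.strip(".,!?").lower() in blocklist for t in tokens):
        return ""                        # hypothetical policy: drop flagged output
    return " ".join(tokens)

apply_guardrails("one two three\nfour", max_tokens=2)  # returns "one two"
```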