Practice - 5 Tasks
Answer the questions below
Task 1: fill in the blank (easy)
Topic: Prompt Engineering / GenAI
Complete the code to start streaming the AI model's response.

    response = model.generate_stream(prompt=[1])
Common mistake: using variable names like 'text' or 'query', which are not defined in this context.
Explanation: The generate_stream method requires the prompt parameter to specify the input text for streaming the response.
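For reference, here is a runnable sketch of the completed answer. The model object and its generate_stream method are the quiz's hypothetical streaming API, so a minimal stub stands in for them; the stub's behavior (splitting the prompt into pretend tokens) is purely an assumption for demonstration.

```python
# Stub standing in for the quiz's hypothetical streaming model API.
class StubResponse:
    def __init__(self, tokens):
        self.tokens = tokens

class StubModel:
    def generate_stream(self, prompt):
        # A real client would open a streaming connection; the stub
        # just splits the prompt into pretend tokens.
        return StubResponse(prompt.split())

model = StubModel()
prompt = "Hello streaming world"

# Task 1 answer: the blank takes the prompt variable, not 'text' or 'query'.
response = model.generate_stream(prompt=prompt)
print(response.tokens)
```

The key point is that the argument must be a name that actually exists in scope, which is why the quiz rejects undefined names like 'text'.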
Task 2: fill in the blank (medium)
Topic: Prompt Engineering / GenAI
Complete the code to read the streamed tokens from the response.

    for token in response.[1]():
        print(token)
Common mistake: using generic method names like 'read' or 'stream', which do not exist in this API.
Explanation: The stream_tokens() method is used to iterate over tokens as they are streamed from the model.
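A runnable sketch of the answer follows. The stream_tokens() method belongs to the quiz's hypothetical response object, so a stub response with a generator method stands in for it here; iterating a generator mimics receiving tokens one at a time.

```python
# Stub for the quiz's hypothetical streaming response object.
class StubResponse:
    def __init__(self, tokens):
        self._tokens = tokens

    def stream_tokens(self):
        # Task 2 answer: stream_tokens() yields tokens as they arrive;
        # generic names like read() or stream() are not part of this API.
        yield from self._tokens

response = StubResponse(["Hello", " ", "world"])
received = []
for token in response.stream_tokens():
    received.append(token)
print("".join(received))
```

Implementing the method as a generator is the natural fit: the loop body runs per token instead of waiting for the whole response.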
Task 3: fill in the blank (hard)
Topic: Prompt Engineering / GenAI
Fix the error in the code to properly handle streaming with a callback function.

    def on_token(token):
        print(token)

    model.generate_stream(prompt=prompt, [1]=on_token)
Common mistake: passing the function under an incorrect parameter name such as 'on_token' or 'token_handler'.
Explanation: The parameter that passes a token-handling function to the stream is commonly named 'callback'.
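The callback pattern can be sketched end to end. The 'callback' parameter name comes from the quiz's hypothetical API; the stub model below is an assumption that simply invokes the supplied function once per token.

```python
# Stub for the quiz's hypothetical model with a 'callback' parameter.
class StubModel:
    def generate_stream(self, prompt, callback):
        # Task 3 answer: the parameter is named 'callback'; the model
        # invokes it once per token as the stream arrives.
        for token in prompt.split():
            callback(token)

seen = []

def on_token(token):
    seen.append(token)

model = StubModel()
prompt = "streaming with callbacks"
model.generate_stream(prompt=prompt, callback=on_token)
print(seen)
```

Note that the local function may have any name (here on_token); what the quiz tests is the keyword under which it is passed.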
Task 4: fill in the blank (hard)
Topic: Prompt Engineering / GenAI
Fill both blanks to correctly initialize streaming with a timeout and a callback.

    response = model.generate_stream(prompt=prompt, [1]=5, [2]=handle_token)
Common mistake: confusing 'max_tokens' with 'timeout', or using the wrong parameter names.
Explanation: The 'timeout' parameter sets the maximum wait time, and 'callback' is the function that handles tokens during streaming.
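Both keywords can be shown together in a sketch. 'timeout' and 'callback' are the quiz's hypothetical parameter names; the stub below only records the timeout rather than enforcing a real deadline, which is an assumption made to keep the example self-contained.

```python
# Stub for the quiz's hypothetical model taking 'timeout' and 'callback'.
class StubModel:
    def generate_stream(self, prompt, timeout, callback):
        # Task 4 answers: 'timeout' bounds the wait (seconds) and
        # 'callback' handles each token; the stub just records them.
        self.timeout = timeout
        for token in prompt.split():
            callback(token)

tokens = []

def handle_token(token):
    tokens.append(token)

model = StubModel()
prompt = "two blanks here"
model.generate_stream(prompt=prompt, timeout=5, callback=handle_token)
print(model.timeout, tokens)
```

The distinction the quiz draws is worth remembering: 'max_tokens' would cap output length, while 'timeout' caps waiting time.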
Task 5: fill in the blank (hard)
Topic: Prompt Engineering / GenAI
Fill both blanks to collect streamed tokens into a string with a callback and print the final output.

    collected = []

    def collect_token(token):
        collected.[1](token)

    model.generate_stream(prompt=prompt, [2]=collect_token)
    print(''.join(collected))
Common mistake: using the wrong list method (such as 'extend') or tacking extra string methods onto the join.
Explanation: Use 'append' to add each token to the list, pass the function via 'callback', and call no extra method after join so the raw output is printed.
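The full collect-and-join pattern can be sketched as follows. As before, the model and its 'callback' parameter are the quiz's hypothetical API, stubbed out here under that assumption.

```python
# Stub for the quiz's hypothetical model; 'callback' is the assumed
# parameter name from the task.
class StubModel:
    def generate_stream(self, prompt, callback):
        for token in prompt.split():
            callback(token)

collected = []

def collect_token(token):
    # Task 5 answer, blank 1: list.append adds one token at a time
    # (extend would splat a string into its individual characters).
    collected.append(token)

model = StubModel()
prompt = "join the tokens"
# Blank 2: the function is passed via the 'callback' parameter.
model.generate_stream(prompt=prompt, callback=collect_token)
print("".join(collected))
```

Why append over extend matters: collected.extend("join") would add 'j', 'o', 'i', 'n' as four separate items, because extend iterates its argument.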