Practice - 5 Tasks
Answer the questions below
Task 1 (fill in the blank, easy)
Complete the code to enable streaming output in a LangChain LLM call.
LangChain
from langchain.llms import OpenAI

llm = OpenAI(streaming=[1])
response = llm("Hello, how are you?")
💡 Hint: Common Mistakes
- Using a string instead of a boolean for the streaming parameter.
- Leaving streaming as False or None, which disables streaming.
Explanation: Setting streaming=True enables the LLM to stream tokens as they are generated.
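Once solved, the pattern can be sanity-checked without an API key. The FakeLLM below is a hypothetical stand-in for the OpenAI wrapper, added only to show that the blank takes the boolean True, not the string "True":

```python
# FakeLLM is a hypothetical stand-in, not the real LangChain class.
class FakeLLM:
    def __init__(self, streaming=False):
        # Reject non-boolean values such as the string "True",
        # which is truthy but the wrong type for this flag.
        if not isinstance(streaming, bool):
            raise TypeError(f"streaming must be a bool, got {type(streaming).__name__}")
        self.streaming = streaming

llm = FakeLLM(streaming=True)  # the blank filled with the boolean True
print(llm.streaming)           # True
```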
Task 2 (fill in the blank, medium)
Complete the code to define a callback handler for streaming tokens in LangChain.
LangChain
from langchain.callbacks.base import BaseCallbackHandler

class StreamHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs):
        print([1])
💡 Hint: Common Mistakes
- Printing self or kwargs instead of the token string.
- Forgetting to print anything inside the method.
Explanation: The on_llm_new_token method receives each token as a string, which we print.
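The handler pattern can be exercised stand-alone. The plain BaseCallbackHandler below is a stub standing in for langchain.callbacks.base.BaseCallbackHandler, so this sketch runs without the library installed:

```python
class BaseCallbackHandler:  # stub, not the real LangChain base class
    pass

class StreamHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs):
        # Print the token itself, not self or kwargs.
        print(token, end="")

handler = StreamHandler()
# Simulate tokens arriving one at a time, as a streaming LLM would send them.
for tok in ["Hel", "lo", "!"]:
    handler.on_llm_new_token(tok)
```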
Task 3 (fill in the blank, hard)
Fix the error in attaching the streaming callback handler to the OpenAI LLM.
LangChain
llm = OpenAI(streaming=True, callbacks=[[1]])
💡 Hint: Common Mistakes
- Passing the class name without parentheses.
- Passing a method instead of an instance.
Explanation: You must pass an instance of the handler class, not the class itself.
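The class-versus-instance mistake can be demonstrated with plain Python; the Handler class here is a hypothetical sketch, not the LangChain API:

```python
class Handler:  # hypothetical callback handler, for illustration only
    def on_llm_new_token(self, token, **kwargs):
        print(token, end="")

wrong = [Handler]    # the class object itself: the common mistake
right = [Handler()]  # an instance: what callbacks=[...] expects

print(isinstance(wrong[0], Handler))  # False: it's the class, not an instance
print(isinstance(right[0], Handler))  # True
```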
Task 4 (fill in the blank, hard)
Fill both blanks to create a streaming LLM with a callback handler and run a prompt.
LangChain
llm = OpenAI(streaming=[1], callbacks=[[2]])
result = llm("Tell me a joke.")
💡 Hint: Common Mistakes
- Setting streaming to False, which disables streaming.
- Passing the class name instead of an instance for callbacks.
Explanation: Streaming must be True and the callback must be an instance of StreamHandler.
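The full wiring can be sketched end to end without a real model. Both FakeLLM and its token list are assumptions made so the flow is runnable; the real OpenAI wrapper fires the same on_llm_new_token calls from the API's token stream:

```python
class StreamHandler:
    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token, **kwargs):
        self.tokens.append(token)

class FakeLLM:  # hypothetical stand-in for OpenAI(streaming=..., callbacks=...)
    def __init__(self, streaming=False, callbacks=None):
        self.streaming = streaming
        self.callbacks = callbacks or []

    def __call__(self, prompt):
        tokens = ["Why", " did", " the", " chicken", " cross?"]
        for tok in tokens:
            if self.streaming:
                for cb in self.callbacks:
                    cb.on_llm_new_token(tok)  # fire the callback per token
        return "".join(tokens)

handler = StreamHandler()
llm = FakeLLM(streaming=True, callbacks=[handler])  # blanks: True, handler
result = llm("Tell me a joke.")
print(result)  # Why did the chicken cross?
```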
Task 5 (fill in the blank, hard)
Fill all three blanks to define a streaming callback handler that collects tokens, then print the full response.
LangChain
class CollectHandler(BaseCallbackHandler):
    def __init__(self):
        self.text = ""

    def on_llm_new_token(self, token: str, **kwargs):
        self.text += [1]

handler = CollectHandler()
llm = OpenAI(streaming=[2], callbacks=[[3]])
response = llm("Say something nice.")
print(handler.text)
💡 Hint: Common Mistakes
- Adding a space string instead of the token.
- Passing the class name instead of the instance as the callback.
- Setting streaming to False.
Explanation: Add each token to self.text, enable streaming=True, and pass the handler instance.
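The collecting pattern, filled in and run against a simulated token stream (the base class is omitted and the three sample tokens are an assumption, so the sketch runs stand-alone):

```python
class CollectHandler:  # base class omitted so this runs without LangChain
    def __init__(self):
        self.text = ""

    def on_llm_new_token(self, token: str, **kwargs):
        self.text += token  # append the token itself, no extra spaces

handler = CollectHandler()
for tok in ["Be", " kind", "."]:  # simulated token stream
    handler.on_llm_new_token(tok)
print(handler.text)  # Be kind.
```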