Complete the code to import the observability tool in LangChain.
from langchain.[1] import get_openai_callback
The callbacks module in LangChain provides observability tools like get_openai_callback to monitor LLM usage.
Complete the code to start tracking LLM usage with a context manager.
with get_openai_callback() as [1]: response = llm(prompt)
By convention, the variable bound in the with statement is named callback; it is then used to read usage statistics after the call.
Fix the error in printing the total tokens used after the LLM call.
print(f"Total tokens used: { [1].total_tokens }")
The variable callback is bound to the handler yielded by the context manager, so callback.total_tokens accesses its attribute. Inside an f-string, write {callback.total_tokens} without quotes.
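The three cards above fit together as import, context manager, then attribute access. Since running get_openai_callback requires a configured LLM and API key, here is a minimal pure-Python sketch of the same pattern; usage_tracker is a hypothetical stand-in for get_openai_callback, not part of LangChain.

```python
from contextlib import contextmanager

@contextmanager
def usage_tracker():
    """Hypothetical stand-in for get_openai_callback: yields an object
    whose total_tokens attribute accumulates usage inside the block."""
    class Tracker:
        total_tokens = 0
    tracker = Tracker()
    try:
        yield tracker
    finally:
        pass  # a real callback would finalize its counters here

with usage_tracker() as callback:
    # A real LLM call inside the block would update the counter;
    # we simulate it here.
    callback.total_tokens += 42

print(f"Total tokens used: {callback.total_tokens}")  # prints: Total tokens used: 42
```

The shape is identical to the real API: the with statement yields a tracker, the LLM call happens inside the block, and the attribute is read afterwards.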
Fill the blanks to create a dictionary that logs prompt and token count.
log = { [1]: [2] for [3], [4] in data.items() }
When iterating over data.items(), key and value are the standard variable names for the unpacked keys and values.
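A runnable sketch of the completed comprehension, assuming data maps prompts to token counts (the sample entries are illustrative):

```python
data = {"summarize the report": 120, "translate to French": 85}

# Rebuild the mapping entry by entry; key and value are the
# conventional names when unpacking data.items().
log = {key: value for key, value in data.items()}
print(log)  # same mapping as data
```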
Fill all three blanks to filter logs for tokens greater than 100.
filtered = {k: v for k, v in logs.items() if v [1] [2] and k [3] 'prompt'}
This dictionary comprehension filters entries where the token count is greater than 100 and the key is not 'prompt'.
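With the blanks filled in (>, 100, and !=), the comprehension behaves as follows; the logs dictionary here is a made-up example:

```python
logs = {"prompt": 150, "query_1": 120, "query_2": 90}

# Keep entries whose token count exceeds 100, excluding the 'prompt' key.
filtered = {k: v for k, v in logs.items() if v > 100 and k != "prompt"}
print(filtered)  # {'query_1': 120}
```

Note that "prompt" is dropped even though its count exceeds 100, because both conditions must hold.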