Observability lets you see what is happening inside your large language model (LLM) app, so you can understand its behavior, find problems quickly, and improve it.
Why observability is essential for LLM apps in LangChain
Introduction
- When you want to see how users interact with your LLM app in real time.
- When you need to find and fix errors or unexpected results from the LLM.
- When you want to improve the app's responses by analyzing past behavior.
- When you want to monitor the app's performance and resource use.
- When you want to keep your app reliable and trustworthy for users.
Syntax
LangChain
```python
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback

llm = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
with get_openai_callback() as cb:
    response = llm('Your prompt here')
print(cb)  # prints token counts and cost for the call
```
This example shows how to use a callback to observe LLM usage like tokens and cost.
Observability tools often include logging, tracing, and metrics to track app behavior.
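The idea behind a usage callback can be sketched in plain Python: a context manager hands you a tracker object, code inside the block reports metrics into it, and the totals are available afterward. The names `UsageTracker`, `track_usage`, and `record` here are illustrative stand-ins, not LangChain APIs, and the cost-per-token figure is a made-up placeholder.

```python
from contextlib import contextmanager
from dataclasses import dataclass

@dataclass
class UsageTracker:
    # Accumulates simple metrics, similar to what an LLM usage callback collects.
    total_tokens: int = 0
    total_cost: float = 0.0

    def record(self, tokens: int, cost_per_token: float = 0.000002) -> None:
        self.total_tokens += tokens
        self.total_cost += tokens * cost_per_token

@contextmanager
def track_usage():
    # Yields a tracker that code in the with-block reports into; the
    # totals remain readable after the block exits.
    tracker = UsageTracker()
    yield tracker

with track_usage() as cb:
    cb.record(tokens=42)  # a real LLM call would report its usage here
    cb.record(tokens=8)

print(f"Tokens used: {cb.total_tokens}")  # prints "Tokens used: 50"
```

This is the same shape as `get_openai_callback`: the context manager scopes the measurement to exactly the calls made inside the `with` block.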
Examples
This example prints how many tokens the LLM used for a prompt.
LangChain
```python
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback

llm = OpenAI()
with get_openai_callback() as cb:
    response = llm('Hello, how are you?')
print(f'Tokens used: {cb.total_tokens}')
```
This example shows how to print LLM events directly to the console for easy monitoring.
LangChain
```python
from langchain.llms import OpenAI
from langchain.callbacks import StdOutCallbackHandler

# StdOutCallbackHandler prints each LLM event to the console as it happens.
llm = OpenAI(callbacks=[StdOutCallbackHandler()])
response = llm('Tell me a joke')
```
Sample Program
This program sends a question to the LLM and uses observability to print the response, tokens used, and cost. It helps you see how much your app is using the LLM and what it returns.
LangChain
```python
from langchain.llms import OpenAI
from langchain.callbacks import get_openai_callback

llm = OpenAI()
with get_openai_callback() as cb:
    response = llm('What is the capital of France?')
print('Response:', response)
print(f'Total tokens used: {cb.total_tokens}')
print(f'Total cost: ${cb.total_cost:.6f}')
```
Important Notes
- Observability helps catch issues early, before users notice them.
- Tracking tokens and cost helps manage your budget when using paid LLM services.
- Use callbacks and logging to get detailed insights into your app's behavior.
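The callback-plus-logging pattern can be sketched without LangChain at all: a handler class exposes start/end hooks, and the code making the LLM call fires them around the work. LangChain's `BaseCallbackHandler` defines hooks with a similar shape (`on_llm_start`, `on_llm_end`); the `LoggingHandler` and `fake_llm` names below are hypothetical, and `fake_llm` is a stand-in for a real model call.

```python
import logging

class LoggingHandler:
    # A minimal callback-handler sketch: records events and logs them.
    def __init__(self):
        self.events = []

    def on_llm_start(self, prompt: str) -> None:
        self.events.append(("start", prompt))
        logging.info("LLM call started: %s", prompt)

    def on_llm_end(self, response: str) -> None:
        self.events.append(("end", response))
        logging.info("LLM call finished: %s", response)

def fake_llm(prompt: str, handlers: list) -> str:
    # Stand-in for an LLM call that fires callbacks before and after the work.
    for h in handlers:
        h.on_llm_start(prompt)
    response = f"echo: {prompt}"
    for h in handlers:
        h.on_llm_end(response)
    return response

handler = LoggingHandler()
fake_llm("Tell me a joke", [handler])
print(handler.events)
# prints [('start', 'Tell me a joke'), ('end', 'echo: Tell me a joke')]
```

Because the handler sits outside the call site, you can swap in logging, tracing, or metrics collection without touching your app's main logic.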
Summary
- Observability shows what happens inside your LLM app.
- It helps you find errors, improve responses, and monitor costs.
- Using callbacks in LangChain is an easy way to add observability.