Viewing trace details and latency in LangChain

Seeing trace details and latency helps you understand how your LangChain app works step by step and how fast each part runs.
from langchain.callbacks import get_openai_callback

with get_openai_callback() as cb:
    result = chain.run("Your input here")
print(cb)

# For detailed step-by-step tracing, use tracing callbacks that log each step's info and timing.
get_openai_callback() measures token usage and estimated cost for OpenAI calls. For full trace details, LangChain supports callback handlers that log each step's input, output, and timing.
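To illustrate what such a handler does, here is a minimal pure-Python sketch of the timing logic. The TimingTracer class is hypothetical and does not subclass LangChain's real BaseCallbackHandler; its on_chain_start/on_chain_end hooks only mirror the shape of the callback interface.

```python
import time

class TimingTracer:
    """Hypothetical sketch of a timing callback (mirrors LangChain's
    on_chain_start/on_chain_end hooks without using the real base class)."""

    def __init__(self):
        self.records = []   # one dict per completed step
        self._starts = {}   # run_id -> (name, inputs, start time)

    def on_chain_start(self, name, inputs, run_id):
        # Remember when this step began.
        self._starts[run_id] = (name, inputs, time.monotonic())

    def on_chain_end(self, outputs, run_id):
        # Compute latency and store the full trace record for this step.
        name, inputs, started = self._starts.pop(run_id)
        self.records.append({
            "name": name,
            "inputs": inputs,
            "outputs": outputs,
            "latency_s": time.monotonic() - started,
        })

# Simulate one traced step.
tracer = TimingTracer()
tracer.on_chain_start("hello_chain", {"name": "Alice"}, run_id=1)
time.sleep(0.01)  # stand-in for real chain work
tracer.on_chain_end({"text": "Hello, Alice!"}, run_id=1)
print(tracer.records[0]["name"], round(tracer.records[0]["latency_s"], 3))
```

The same pattern (record a start time per run id, subtract it when the step ends) is what real tracing callbacks do under the hood.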
from langchain.callbacks.tracers import ConsoleCallbackHandler
from langchain.chains import LLMChain

# ConsoleCallbackHandler prints each step's inputs, outputs, and elapsed time to the console.
# (LangChainTracer, from the same module, sends runs to LangSmith instead of printing them.)
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[ConsoleCallbackHandler()])
result = chain.run("Hello")
This program runs a simple LangChain chain that greets a name. A tracer callback captures detailed trace info for each step, including inputs, outputs, and latency, and the program prints the final result along with the trace details.
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.callbacks.tracers import ConsoleCallbackHandler

# Create a simple prompt
prompt = PromptTemplate(template="Say hello to {name}", input_variables=["name"])

# Initialize the OpenAI LLM
llm = OpenAI(temperature=0)

# Create a console tracer; it prints each step's inputs, outputs, and elapsed time
tracer = ConsoleCallbackHandler()

# Create the chain with the tracer callback
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[tracer])

# Run the chain; trace details are printed step by step as it executes
result = chain.run({"name": "Alice"})

# Print the final result
print("Result:", result)
Trace details include inputs, outputs, start and end times, and latency for each step.
Latency measurements help you find slow steps to optimize.
Use callbacks to customize what trace info you want to collect.
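Once per-step latencies are collected, a quick sort surfaces the steps worth optimizing first. The records below are hypothetical stand-ins for what a timing callback might collect.

```python
# Hypothetical per-step latency records, e.g. collected by a timing callback.
step_latencies = [
    ("load_prompt", 0.002),
    ("llm_call", 1.850),
    ("parse_output", 0.015),
]

# Sort slowest-first to see where optimization effort should go.
slowest_first = sorted(step_latencies, key=lambda r: r[1], reverse=True)
for name, seconds in slowest_first:
    print(f"{name}: {seconds:.3f}s")
```

In a typical chain, the LLM call dominates total latency, as in this sample data.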
Viewing trace details helps you see what happens inside your LangChain app.
Latency shows how long each step takes, helping find slow parts.
Use built-in tracers like ConsoleCallbackHandler (prints to the console) or LangChainTracer (sends runs to LangSmith) to get detailed trace info easily.