
Viewing trace details and latency in LangChain

Introduction

Viewing trace details and latency helps you understand how your LangChain app works step by step and how fast each part runs.

You want to check which part of your chain is slow.
You need to debug why your app is not giving expected results.
You want to improve performance by finding bottlenecks.
You want to see the inputs and outputs of each step clearly.
You want to log detailed info for monitoring your app.
Syntax
LangChain
from langchain.callbacks import get_openai_callback

with get_openai_callback() as cb:
    result = chain.run("Your input here")
    print(cb)

# For detailed tracing, use tracing tools or callbacks that show step info and timing.

get_openai_callback() measures token usage and cost for the OpenAI calls made inside the with block; the callback object exposes counters such as total_tokens and total_cost.

For full trace details, LangChain supports callback handlers that log each step's input, output, and time.
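The mechanics of such a handler are easy to sketch without the framework: it receives a start event and an end event around each step, and records the input, output, and elapsed time in between. The following stand-alone illustration uses hypothetical names (StepTracer and run_step are not LangChain APIs):

```python
import time

class StepTracer:
    """Records input, output, and latency per step, like a tracer callback."""
    def __init__(self):
        self.records = []

    def on_step_start(self, name, inputs):
        # Remember the step and when it started
        self._start = time.perf_counter()
        self._name, self._inputs = name, inputs

    def on_step_end(self, outputs):
        # Compute latency and store one trace record for this step
        latency = time.perf_counter() - self._start
        self.records.append({"step": self._name, "inputs": self._inputs,
                             "outputs": outputs, "latency": latency})

def run_step(tracer, name, fn, inputs):
    # Fire the start/end events around one step, the way a chain would
    tracer.on_step_start(name, inputs)
    outputs = fn(inputs)
    tracer.on_step_end(outputs)
    return outputs

tracer = StepTracer()
run_step(tracer, "uppercase", str.upper, "hello")
print(tracer.records[0]["step"], tracer.records[0]["outputs"])
```

LangChain's real callback handlers follow the same shape, with hooks such as on_chain_start and on_chain_end.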

Examples
This shows token usage and cost for the OpenAI call inside the chain.
LangChain
from langchain.callbacks import get_openai_callback

with get_openai_callback() as cb:
    result = chain.run("Hello")
    print(cb)
This example uses the built-in ConsoleCallbackHandler to print detailed trace info for each step of the chain execution.
LangChain
from langchain.callbacks.tracers import ConsoleCallbackHandler
from langchain.chains import LLMChain

tracer = ConsoleCallbackHandler()
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[tracer])

# Each step's inputs, outputs, and timing are printed to the console
result = chain.run("Hello")
Sample Program

This program runs a simple LangChain chain that says hello to a name. It uses a custom callback handler to capture trace details, including inputs, outputs, and latency, and prints the final result.

LangChain
import time

from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.callbacks.base import BaseCallbackHandler

# Custom handler that records each step's inputs, outputs, and latency
class TraceHandler(BaseCallbackHandler):
    def on_chain_start(self, serialized, inputs, **kwargs):
        self.start = time.perf_counter()
        print("Inputs:", inputs)

    def on_chain_end(self, outputs, **kwargs):
        print("Outputs:", outputs)
        print(f"Latency: {time.perf_counter() - self.start:.2f} s")

# Create a simple prompt
prompt = PromptTemplate(template="Say hello to {name}", input_variables=["name"])

# Initialize OpenAI LLM
llm = OpenAI(temperature=0)

# Create chain with the tracing callback
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[TraceHandler()])

# Run chain
result = chain.run({"name": "Alice"})

# Print the final result
print("Result:", result)
Important Notes

Trace details include inputs, outputs, start and end times, and latency for each step.

Latency helps find slow parts to optimize.

Use callbacks to customize what trace info you want to collect.
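Once per-step latencies are collected, finding the bottleneck is a one-liner: the step with the largest latency is the optimization target. A quick sketch with hypothetical step names and made-up timings:

```python
# Per-step latencies (in seconds) as a trace might report them; values are made up
latencies = {"retrieval": 0.12, "llm_call": 1.85, "output_parse": 0.01}

# The step with the largest latency is the bottleneck
bottleneck = max(latencies, key=latencies.get)
print(bottleneck)  # → llm_call
```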

Summary

Viewing trace details helps you see what happens inside your LangChain app.

Latency shows how long each step takes, helping find slow parts.

Use built-in callbacks such as ConsoleCallbackHandler, or write your own callback handler, to get detailed trace info easily.