LangChain framework, ~10 mins

Viewing trace details and latency in LangChain - Step-by-Step Execution

Concept Flow - Viewing trace details and latency
1. Start LangChain Call
2. Initialize Trace
3. Execute Chain Step
4. Record Step Details & Latency
5. More Steps? If yes, return to step 3 (Execute Chain Step); if no, continue
6. Compile Trace Summary
7. Return Trace Data & Latency Info
8. End
Shows how LangChain starts a call, records each step's details and latency, then compiles and returns the full trace.
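The loop above can be sketched in plain Python. This is an illustrative stand-in for LangChain's internal tracing, not its real API: each step runs, its latency is recorded, and a summary is compiled at the end.

```python
import time

def run_with_trace(steps, inputs):
    """Run a sequence of chain steps, recording each step's output and latency.

    `steps` is a list of (name, fn) pairs; this mimics the Concept Flow
    above and is not LangChain's actual tracing implementation.
    """
    trace = {"steps": [], "total_latency_ms": 0.0}  # Initialize Trace
    value = inputs
    for name, fn in steps:                          # Execute Chain Step
        start = time.perf_counter()
        value = fn(value)
        elapsed_ms = (time.perf_counter() - start) * 1000
        trace["steps"].append(                      # Record Step Details & Latency
            {"step": name, "output": value, "latency_ms": elapsed_ms}
        )
        trace["total_latency_ms"] += elapsed_ms     # latency accumulates
    return value, trace                             # Return Trace Data & Latency Info

# Two toy steps standing in for an LLM call and a post-processing step
result, trace = run_with_trace(
    [("llm_call", lambda s: s + " -> Hi!"),
     ("post_process", str.strip)],
    "Hello",
)
```

The per-step dictionaries in `trace["steps"]` play the role of the rows in the Execution Table below.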
Execution Sample
LangChain
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain_openai import OpenAI
from langchain.callbacks import get_openai_callback

# Build a minimal chain: an OpenAI LLM plus a pass-through prompt
llm = OpenAI()
prompt = PromptTemplate.from_template("{input}")
chain = LLMChain(llm=llm, prompt=prompt)

with get_openai_callback() as cb:
    result = chain.run("Hello")
    print(cb)  # token counts and totals for the run
Runs a LangChain chain with a callback that tracks and prints trace details and latency.
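The timing idea behind a callback can be sketched without LangChain installed. The hook names below mirror those of LangChain's `BaseCallbackHandler` (`on_llm_start` / `on_llm_end`), but this class is a simplified, framework-free illustration with made-up signatures:

```python
import time

class LatencyCallback:
    """Minimal stand-in for a callback handler that times an LLM call
    between start and end hooks. Hook names echo LangChain's
    BaseCallbackHandler; the signatures here are simplified."""

    def __init__(self):
        self.latency_ms = None
        self._start = None

    def on_llm_start(self):
        self._start = time.perf_counter()  # trace initialized

    def on_llm_end(self):
        # latency for this step, recorded in milliseconds
        self.latency_ms = (time.perf_counter() - self._start) * 1000

cb = LatencyCallback()
cb.on_llm_start()
fake_output = "Hi!"   # pretend the LLM responded here
cb.on_llm_end()
print(f"LLM step took {cb.latency_ms:.2f} ms")
```

A real handler would receive the prompts, outputs, and run IDs as arguments, which is how trace details end up alongside the latency numbers.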
Execution Table
| Step | Action | Trace Detail Recorded | Latency (ms) | Notes |
| 1 | Start chain run | Trace initialized | 0 | Begin tracking |
| 2 | Call LLM with input "Hello" | Input recorded | 120 | LLM processing time |
| 3 | Receive LLM output | Output recorded | 5 | Output received |
| 4 | Post-process output | Post-processing step recorded | 10 | Formatting output |
| 5 | End chain run | Total latency calculated | 135 | Sum of all steps |
| 6 | Print callback info | Trace summary printed | 0 | Shows tokens used and latency |
💡 Chain run completes after all steps, total latency 135 ms recorded
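The 135 ms total in row 5 is simply the per-step latencies summed, which a quick check confirms:

```python
# Per-step latencies from the Execution Table rows 1-4 (ms)
step_latencies_ms = [0, 120, 5, 10]  # start, LLM call, receive output, post-process
total = sum(step_latencies_ms)
assert total == 135  # matches the "Total latency calculated" row
```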
Variable Tracker
| Variable | Start | After Step 2 | After Step 3 | After Step 4 | Final |
| trace_details | {} | {"input": "Hello"} | {"output": "Hi!"} | {"post_process": "Formatted"} | {"input": "Hello", "output": "Hi!", "post_process": "Formatted"} |
| latency_ms | 0 | 120 | 125 | 135 | 135 |
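The trace_details column behaves like a dictionary that each step merges its record into, ending with the combined Final value; a sketch of that accumulation (values taken from the tracker above):

```python
trace_details = {}                                   # Start: empty trace
trace_details.update({"input": "Hello"})             # After step 2: input recorded
trace_details.update({"output": "Hi!"})              # After step 3: output recorded
trace_details.update({"post_process": "Formatted"})  # After step 4: post-processing recorded

# Final state holds every recorded detail
assert trace_details == {"input": "Hello", "output": "Hi!",
                         "post_process": "Formatted"}
```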
Key Moments - 2 Insights
Why does latency increase after each step in the execution table?
Each step adds its processing time to the running total, as rows 2 through 5 show the times accumulating (120 → 125 → 135 ms).
What does the trace_details variable hold after step 4?
All recorded data from the input, output, and post-processing steps, as the variable tracker shows after step 4.
Visual Quiz - 3 Questions
Test your understanding
Looking at the execution table, what trace detail is recorded at step 3?
A. Input to the LLM
B. Post-processing details
C. Output from the LLM
D. Total latency
💡 Hint
Check the 'Trace Detail Recorded' column at step 3 in execution_table
At which step does the total latency get calculated?
A. Step 3
B. Step 5
C. Step 2
D. Step 6
💡 Hint
Look for 'Total latency calculated' in the 'Trace Detail Recorded' column
If the LLM call latency increased to 200 ms, how would latency_ms change after step 2?
A. It would increase to 200 ms
B. It would decrease
C. It would stay at 120 ms
D. It would reset to 0
💡 Hint
Refer to latency_ms values in variable_tracker after step 2
Concept Snapshot
LangChain tracing captures each step's input, output, and processing time.
Use callbacks like get_openai_callback() to record trace details.
Latency accumulates as each step runs.
Trace data helps debug and optimize chain performance.
Print callback info to see tokens used and total latency.
Full Transcript
This visual trace shows how LangChain records detailed trace information and latency during a chain run. The process starts by initializing the trace, then executing each step such as calling the language model, receiving output, and post-processing. Each step's details and latency are recorded and accumulated. After all steps complete, the total latency is calculated and the trace summary is printed. Variables like trace_details hold the input, output, and processing info, while latency_ms tracks the time spent. This helps learners see how LangChain tracks performance and data flow step-by-step.