How to Use Callbacks in LangChain for Custom Actions
In LangChain, you use callbacks by creating a class that inherits from BaseCallbackHandler and overriding its methods to react to events during chain execution. You then pass an instance of this class to your chain or agent via the callbacks parameter to customize behavior or add logging.

Syntax
To use callbacks in LangChain, define a class that extends BaseCallbackHandler. Override methods like on_chain_start, on_chain_end, or on_llm_new_token to handle specific events. Pass an instance of your callback class to the callbacks argument when running chains or agents.
This lets you run custom code at key points during execution, such as logging or modifying outputs.
```python
from langchain.callbacks.base import BaseCallbackHandler

class MyCallbackHandler(BaseCallbackHandler):
    def on_chain_start(self, serialized, inputs, **kwargs):
        print("Chain started with inputs:", inputs)

    def on_chain_end(self, outputs, **kwargs):
        print("Chain ended with outputs:", outputs)

# Usage example:
# chain.run(..., callbacks=[MyCallbackHandler()])
```
Example
This example shows a simple LangChain chain with a callback that prints messages when the chain starts and ends.
```python
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.callbacks.base import BaseCallbackHandler

class PrintCallback(BaseCallbackHandler):
    def on_chain_start(self, serialized, inputs, **kwargs):
        print(f"Starting chain with inputs: {inputs}")

    def on_chain_end(self, outputs, **kwargs):
        print(f"Chain finished with outputs: {outputs}")

# Create a prompt template
prompt = PromptTemplate(template="Say hello to {name}", input_variables=["name"])

# Create an LLM instance
llm = OpenAI(temperature=0)

# Create the chain
chain = LLMChain(llm=llm, prompt=prompt)

# Run the chain with the callback
result = chain.run({"name": "Alice"}, callbacks=[PrintCallback()])
print("Result:", result)
```
Output
Starting chain with inputs: {'name': 'Alice'}
Chain finished with outputs: {'text': 'Hello Alice!'}
Result: Hello Alice!
Common Pitfalls
- Not passing the callback instance in a list to the callbacks parameter (it must be a list even for a single callback).
- Overriding callback methods but forgetting to accept **kwargs or other required parameters.
- Using callbacks with chains or agents that do not support callbacks (check documentation).
- Printing or logging too much inside callbacks can slow down execution.
```python
from langchain.callbacks.base import BaseCallbackHandler

# Wrong: passing the callback instance directly (not in a list)
# chain.run("input", callbacks=MyCallbackHandler())  # This will error

# Right: pass it as a list
# chain.run("input", callbacks=[MyCallbackHandler()])
```
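The **kwargs pitfall can be illustrated without running a live chain. This sketch uses plain stand-in classes (not the real LangChain base class) to show why a handler method that omits **kwargs breaks when the framework passes extra keyword arguments such as a run ID:

```python
# Hypothetical stand-ins to illustrate the **kwargs pitfall; real callback
# frameworks pass extra keyword arguments (e.g. run_id) to every hook.

class BadHandler:
    # Missing **kwargs: raises TypeError when extra kwargs are passed.
    def on_chain_start(self, serialized, inputs):
        print("started:", inputs)

class GoodHandler:
    # Accepts and ignores extra keyword arguments.
    def on_chain_start(self, serialized, inputs, **kwargs):
        print("started:", inputs)

def fire_chain_start(handler):
    # Simulates how a framework invokes the hook, with extra kwargs.
    handler.on_chain_start({"name": "LLMChain"}, {"name": "Alice"}, run_id="abc123")

try:
    fire_chain_start(BadHandler())
except TypeError as e:
    print("BadHandler failed:", e)

fire_chain_start(GoodHandler())  # works fine
```

Accepting **kwargs keeps your handler forward-compatible when the framework adds new event metadata.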
Quick Reference
Key callback methods you can override:
- on_chain_start(serialized, inputs, **kwargs): Called when a chain starts.
- on_chain_end(outputs, **kwargs): Called when a chain ends.
- on_llm_new_token(token, **kwargs): Called for each new token streamed from the LLM.
- on_tool_start(serialized, input_str, **kwargs): Called when a tool starts.
- on_agent_action(action, **kwargs): Called when an agent takes an action.
Always pass callbacks as a list to chains or agents.
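To see when these hooks fire relative to each other, here is a minimal stand-in (not the real LangChain classes) that dispatches events to a handler in the order a simple streaming LLM chain would:

```python
# Hypothetical mini-framework to illustrate hook ordering; real LangChain
# chains dispatch these events internally through a callback manager.

class RecordingHandler:
    """Records every callback event so the order can be inspected."""
    def __init__(self):
        self.events = []

    def on_chain_start(self, serialized, inputs, **kwargs):
        self.events.append(("chain_start", inputs))

    def on_llm_new_token(self, token, **kwargs):
        self.events.append(("token", token))

    def on_chain_end(self, outputs, **kwargs):
        self.events.append(("chain_end", outputs))

def run_fake_chain(inputs, callbacks):
    # Fires hooks in the order a streaming LLM chain would:
    # chain_start, one event per token, then chain_end.
    for cb in callbacks:
        cb.on_chain_start({"name": "FakeChain"}, inputs)
    tokens = ["Hello", " ", "Alice", "!"]
    for tok in tokens:
        for cb in callbacks:
            cb.on_llm_new_token(tok)
    outputs = {"text": "".join(tokens)}
    for cb in callbacks:
        cb.on_chain_end(outputs)
    return outputs["text"]

handler = RecordingHandler()
result = run_fake_chain({"name": "Alice"}, callbacks=[handler])
print(result)                # Hello Alice!
print(handler.events[0][0])  # chain_start
print(handler.events[-1][0]) # chain_end
```

The handler itself stays passive; the framework decides when each hook runs, which is why your overrides must match the expected method names and signatures.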
Key Takeaways
Create a callback class by extending BaseCallbackHandler and override event methods.
Pass callback instances as a list to the callbacks parameter when running chains or agents.
Use callbacks to customize or monitor chain execution, such as logging inputs and outputs.
Avoid heavy processing inside callbacks to keep chain performance smooth.
Check LangChain docs for supported callback methods for your chain or agent type.