How to Set Temperature in LangChain for Language Models
In LangChain, you set the temperature by passing the temperature parameter when creating a language model instance such as OpenAI. The temperature controls randomness: lower values make output more focused and deterministic, higher values make it more varied and creative.
Syntax
To set temperature in LangChain, provide the temperature argument when initializing the language model. For example, with the OpenAI class, use OpenAI(temperature=0.7).
The temperature value is a float, typically between 0 and 1 (some providers accept higher values), where 0 means very deterministic output and values closer to 1 increase randomness.
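To build intuition for what the parameter does under the hood, here is a minimal, library-free sketch (not LangChain code) of how temperature rescales a model's logits before sampling: dividing by a small temperature sharpens the probability distribution, while a large temperature flattens it.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then normalize into probabilities."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three tokens

low = softmax_with_temperature(logits, 0.1)   # near-deterministic
high = softmax_with_temperature(logits, 2.0)  # flatter, more random

print(low[0])   # the top token dominates at low temperature
print(high[0])  # probability mass is spread out at high temperature
```

At temperature 0.1 the highest-scoring token receives almost all of the probability mass, which is why low temperatures produce repeatable output; at 2.0 the three tokens are much closer to equally likely.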
```python
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.7)
```
Example
This example shows how to create an OpenAI language model with a temperature of 0.9 and generate a text completion. Higher temperature makes the output more creative and varied.
```python
from langchain.llms import OpenAI

# Create the language model with temperature 0.9
llm = OpenAI(temperature=0.9)

# Generate a completion
response = llm("Write a short, creative poem about the sun.")
print(response)
```
Output
The sun dances high, a golden flame,
Warming earth with gentle claim.
Bright rays weave through sky so blue,
A daily gift, forever new.
Common Pitfalls
Common mistakes include:
- Relying on the default temperature without checking it (LangChain's OpenAI wrapper has often defaulted to 0.7, while the underlying OpenAI API defaults to 1), which may produce more random output than intended.
- Using values outside the typical range (0 to 1), which can cause unexpected behavior.
- Confusing temperature with other parameters such as top_p or max_tokens.
Always check your model's documentation for supported temperature ranges.
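To see why temperature and top_p should not be confused, here is a minimal, library-free sketch (not LangChain code, and not how any provider implements it internally): temperature rescales all logits before sampling, while top_p (nucleus sampling) keeps only the smallest set of top tokens whose cumulative probability reaches the threshold.

```python
import math

def apply_temperature(logits, temperature):
    """Temperature rescales ALL logits, reshaping the whole distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def apply_top_p(probs, top_p):
    """Nucleus sampling keeps the smallest top set of tokens whose
    cumulative probability reaches top_p, then renormalizes."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return {i: probs[i] / total for i in kept}

probs = apply_temperature([2.0, 1.0, 0.2, -1.0], 1.0)
nucleus = apply_top_p(probs, 0.9)
print(nucleus)  # the lowest-probability token is cut off entirely
```

The key difference: temperature makes every token more or less likely by degrees, whereas top_p removes unlikely tokens from consideration altogether. This is why providers often advise tuning one of the two rather than both at once.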
```python
from langchain.llms import OpenAI

# Wrong: temperature as a string (raises a validation error)
# llm = OpenAI(temperature='high')

# Right: temperature as a float
llm = OpenAI(temperature=0.5)
```
Quick Reference
| Parameter | Description | Typical Values |
|---|---|---|
| temperature | Controls randomness of output | 0.0 (deterministic) to 1.0 (creative) |
| top_p | Alternative to temperature for nucleus sampling | 0.0 to 1.0 |
| max_tokens | Maximum tokens to generate | Integer, e.g., 100 |
| model_name | Name of the language model | e.g., 'gpt-4', 'text-davinci-003' |
Key Takeaways
Set temperature in LangChain by passing the temperature parameter when creating the language model instance.
Temperature controls how creative or focused the output is: lower means more focused, higher means more creative.
Use float values between 0 and 1 for temperature; avoid invalid types or out-of-range values.
Check your model's documentation for supported temperature ranges and defaults.
Temperature is different from other parameters like top_p or max_tokens.