Langchain · How-To · Beginner · 3 min read

How to Set Temperature in Langchain for Language Models

In Langchain, you set the temperature by passing the temperature parameter when creating a language model instance like OpenAI. The temperature controls randomness: lower values make output more focused, higher values make it more creative.
📝

Syntax

To set temperature in Langchain, provide the temperature argument when initializing the language model. For example, with the OpenAI class, use OpenAI(temperature=0.7).

The temperature value is a float, typically between 0 and 1 (OpenAI's API accepts values up to 2), where 0 makes output nearly deterministic and values closer to 1 increase randomness.

python
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.7)
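Under the hood, temperature rescales the model's next-token probabilities before sampling. A pure-Python sketch of the idea (illustrative only, not Langchain code — the function name and logits are made up for the example):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by the temperature before normalizing:
    # low temperature sharpens the distribution toward the top
    # token, high temperature flattens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
cold = softmax_with_temperature(logits, 0.2)
hot = softmax_with_temperature(logits, 1.5)

# At low temperature the top token takes almost all the
# probability mass; at high temperature the mass spreads out.
print(f"cold top-token prob: {cold[0]:.3f}")
print(f"hot top-token prob:  {hot[0]:.3f}")
```

The same logits give the top token a probability near 0.99 at temperature 0.2, but only around 0.53 at temperature 1.5 — which is why higher values feel more "creative".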
💻

Example

This example shows how to create an OpenAI language model with a temperature of 0.9 and generate a text completion. Higher temperature makes the output more creative and varied.

python
from langchain.llms import OpenAI

# Create the language model with temperature 0.9
llm = OpenAI(temperature=0.9)

# Generate a completion
response = llm("Write a short, creative poem about the sun.")
print(response)
Output
The sun dances high, a golden flame, Warming earth with gentle claim. Bright rays weave through sky so blue, A daily gift, forever new.
⚠️

Common Pitfalls

Common mistakes include:

  • Not setting the temperature parameter and relying on the default (0.7 for Langchain's OpenAI class), which may be more random than you want.
  • Using values outside the typical range (0 to 1), which can cause unexpected behavior.
  • Confusing temperature with other parameters like top_p or max_tokens.

Always check your model's documentation for supported temperature ranges.

python
from langchain.llms import OpenAI

# Wrong: temperature as string (will cause error)
# llm = OpenAI(temperature='high')

# Right: temperature as float
llm = OpenAI(temperature=0.5)
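To catch these mistakes before they reach the model, you can check the value up front. A minimal sketch — the validate_temperature helper is hypothetical, not part of Langchain:

```python
def validate_temperature(value, low=0.0, high=1.0):
    # Reject non-numeric values (like the string 'high' above)
    # and flag values outside the typical 0-1 range.
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        raise TypeError(f"temperature must be a number, got {type(value).__name__}")
    if not low <= value <= high:
        raise ValueError(f"temperature {value} is outside [{low}, {high}]")
    return float(value)

print(validate_temperature(0.5))   # passes
```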
📊

Quick Reference

Parameter   | Description                                    | Typical Values
temperature | Controls randomness of output                  | 0.0 (deterministic) to 1.0 (creative)
top_p       | Alternative to temperature (nucleus sampling)  | 0.0 to 1.0
max_tokens  | Maximum tokens to generate                     | Integer, e.g. 100
model_name  | Name of the language model                     | e.g. 'gpt-4', 'text-davinci-003'
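These parameters are typically passed together as keyword arguments when constructing the model. A sketch of one such configuration (the values here are only examples, not recommendations):

```python
# Illustrative parameter set mirroring the reference table.
# In Langchain these would be forwarded to the model, e.g.
# OpenAI(**llm_kwargs); values are examples only.
llm_kwargs = {
    "model_name": "gpt-4",   # which model to call
    "temperature": 0.7,      # randomness: 0.0 focused, 1.0 creative
    "top_p": 1.0,            # nucleus sampling; usually tune this OR temperature
    "max_tokens": 100,       # cap on generated length
}

print(llm_kwargs["temperature"])
```

Note that most providers recommend tuning either temperature or top_p, not both at once.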
✅

Key Takeaways

  • Set temperature in Langchain by passing the temperature parameter when creating the language model instance.
  • Temperature controls how creative or focused the output is: lower means more focused, higher means more creative.
  • Use float values between 0 and 1 for temperature; avoid invalid types or out-of-range values.
  • Check your model's documentation for supported temperature ranges and defaults.
  • Temperature is different from other parameters like top_p or max_tokens.