LangChain framework · ~10 mins

Connecting to OpenAI models in LangChain - Step-by-Step Execution

Concept Flow - Connecting to OpenAI models
Import LangChain OpenAI class
Create OpenAI instance with API key
Call model with prompt
Receive response from OpenAI
Use or display the response
This flow shows how to import, create, call, and get a response from an OpenAI model using LangChain.
Execution Sample
import os
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Read the key from the environment; temperature controls randomness (0.0 = deterministic).
llm = ChatOpenAI(openai_api_key=os.getenv("OPENAI_API_KEY"), model_name="gpt-4", temperature=0.7)

# Calling the model with a list of messages sends the prompt to the OpenAI API.
response = llm([HumanMessage(content="Hello!")])
print(response.content)  # text of the model's reply

# Note: newer LangChain versions import ChatOpenAI from langchain_openai
# and call the model with llm.invoke([...]) instead.
This code connects to the OpenAI GPT-4 model, sends a greeting prompt, and prints the model's reply.
Execution Table
| Step | Action | Input/State | Output/Result |
|------|--------|-------------|---------------|
| 1 | Import ChatOpenAI class | None | ChatOpenAI class ready to use |
| 2 | Create OpenAI instance | openai_api_key=os.getenv('OPENAI_API_KEY'), model_name='gpt-4', temperature=0.7 | llm object created with settings |
| 3 | Call model with prompt | [HumanMessage(content="Hello!")] | Request sent to OpenAI API |
| 4 | Receive response | OpenAI processes prompt | Response object with content text |
| 5 | Print response content | response.content | Printed text from model reply |
💡 Execution stops after printing the model's response.
Variable Tracker
| Variable | Start | After Step 2 | After Step 3 | After Step 4 | Final |
|----------|-------|--------------|--------------|--------------|-------|
| llm | None | ChatOpenAI instance | Same instance | Same instance | Same instance |
| response | None | None | Response object | Response object | Response object with content |
Key Moments - 3 Insights
Why do we need to create an instance of ChatOpenAI before calling the model?
Creating the instance sets up the model name and parameters like temperature, which configure how the model responds. This is shown in step 2 of the execution table.
What does calling the model expect as input?
Calling the llm object expects a list of BaseMessage objects, such as [HumanMessage(content="Hello!")]. This is shown in step 3.
Where does the actual AI response come from?
The response comes from the OpenAI API after processing the prompt, shown in step 4 where the response object is received.
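The instance → call → response.content pattern above can be sketched offline. The class below is a hypothetical stand-in that mirrors the shape of LangChain's ChatOpenAI and message classes but echoes a canned reply instead of calling the OpenAI API, so each step of the execution table is visible without a network request:

```python
from dataclasses import dataclass


@dataclass
class HumanMessage:
    # Mirrors LangChain's HumanMessage: a prompt authored by the user.
    content: str


@dataclass
class AIMessage:
    # Mirrors the response object: the model's reply text lives in .content.
    content: str


class FakeChatModel:
    """Hypothetical stand-in for ChatOpenAI that never touches the network."""

    def __init__(self, model_name: str, temperature: float):
        # Step 2: the settings are stored on the instance at construction time.
        self.model_name = model_name
        self.temperature = temperature

    def __call__(self, messages: list[HumanMessage]) -> AIMessage:
        # Steps 3-4: "send" the message list and "receive" a response object.
        prompt = messages[-1].content
        return AIMessage(content=f"Echo from {self.model_name}: {prompt}")


llm = FakeChatModel(model_name="gpt-4", temperature=0.7)   # step 2
response = llm([HumanMessage(content="Hello!")])           # steps 3-4
print(response.content)                                    # step 5
```

Swapping the stand-in for the real ChatOpenAI leaves the calling code unchanged, which is why the execution table reads the same either way.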
Visual Quiz - 3 Questions
Test your understanding
Looking at the execution table, what is the state of 'llm' after step 2?
A. It is None
B. It is a ChatOpenAI instance configured with model and temperature
C. It contains the response from the model
D. It is a string with the prompt
💡 Hint
Check the Variable Tracker row for 'llm' after step 2.
At which step does the program send the prompt to the OpenAI API?
A. Step 3
B. Step 4
C. Step 1
D. Step 5
💡 Hint
Look at the 'Action' and 'Output/Result' columns in the execution table.
If you change the temperature parameter to 0.0 when creating 'llm', what changes in the execution?
A. The model name changes
B. The prompt sent changes
C. The model's response becomes more deterministic
D. The response is printed differently
💡 Hint
Temperature controls randomness in the model's output, set in step 2.
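Temperature's effect can be illustrated with a toy sampler (a sketch, not OpenAI's actual decoder): dividing token scores by the temperature before a softmax sharpens or flattens the distribution, and at temperature 0 sampling collapses to always picking the highest-scoring token.

```python
import math
import random


def sample_token(scores: dict[str, float], temperature: float, rng: random.Random) -> str:
    """Toy sampler: softmax over scores/temperature, then draw one token."""
    if temperature == 0.0:
        # Zero temperature: deterministic, always the highest-scoring token.
        return max(scores, key=scores.get)
    weights = [math.exp(s / temperature) for s in scores.values()]
    return rng.choices(list(scores), weights=weights)[0]


scores = {"Hello": 2.0, "Hi": 1.5, "Hey": 0.5}
rng = random.Random(42)

# temperature=0.0 always returns the same token...
assert all(sample_token(scores, 0.0, rng) == "Hello" for _ in range(10))
# ...while a higher temperature lets lower-scoring tokens through sometimes.
print({sample_token(scores, 2.0, rng) for _ in range(50)})
```

This is why option C is correct: setting temperature=0.0 in step 2 makes the model's replies reproducible for the same prompt, while higher values add variety.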
Concept Snapshot
Connecting to OpenAI models with LangChain:
- Import ChatOpenAI from langchain.chat_models and HumanMessage from langchain.schema
- Create instance: llm = ChatOpenAI(openai_api_key=os.getenv("OPENAI_API_KEY"), model_name="gpt-4", temperature=0.7)
- Call model with messages: llm([HumanMessage(content="Hello!")])
- Receive and use response.content
- Temperature controls response creativity
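Because os.getenv returns None when the variable is unset, a small guard before constructing the client fails faster and more clearly than an authentication error from the API later. A minimal sketch (the variable name OPENAI_API_KEY matches the snippet above; the demo value is set only so the sketch runs standalone):

```python
import os


def require_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Fetch the API key from the environment, failing fast if it is missing."""
    key = os.getenv(var)
    if not key:
        raise RuntimeError(f"{var} is not set; export it before creating the model.")
    return key


os.environ["OPENAI_API_KEY"] = "sk-demo"  # demo value so the sketch runs standalone
print(require_api_key())  # → sk-demo
```

In real use you would export the key in your shell instead of setting it in code, then pass require_api_key() as openai_api_key when creating the ChatOpenAI instance.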
Full Transcript
To connect to OpenAI models using LangChain, first import the ChatOpenAI class and HumanMessage. Then create an instance with your API key, chosen model name, and parameters like temperature. Next, call the model with a list of messages like [HumanMessage(content='Hello!')]. The call sends the prompt to OpenAI's API and returns a response object. Finally, access the response content to see the model's reply. This process sets up the connection, sends input, and receives output step by step.