LangChain framework · ~10 mins

Connecting to open-source models in LangChain - Step-by-Step Execution

Concept Flow - Connecting to open-source models
Start
Import LangChain
Choose Open-Source Model
Set Model Parameters
Create Model Instance
Send Input to Model
Receive Output
Use Output in Application
End
This flow shows how to connect to an open-source model using LangChain: import, select model, set parameters, create instance, send input, get output, then use it.
Execution Sample
LangChain
# In newer LangChain versions this import lives in langchain_community.llms
from langchain.llms import HuggingFaceHub

# Requires a Hugging Face API token in the HUGGINGFACEHUB_API_TOKEN
# environment variable.
model = HuggingFaceHub(repo_id="google/flan-t5-small")
response = model("What is AI?")
print(response)
This code connects to an open-source model hosted on the Hugging Face Hub, sends it a question, and prints the answer.
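The same import → instantiate → call → print flow can be traced offline with a stub in place of the real wrapper. `FakeLLM` below is a hypothetical stand-in used only for illustration; it is not part of LangChain:

```python
class FakeLLM:
    """Stand-in for an LLM wrapper such as HuggingFaceHub (illustration only)."""

    def __init__(self, repo_id):
        self.repo_id = repo_id  # which model the wrapper would load

    def __call__(self, prompt):
        # A real wrapper would send the prompt to the hosted model and
        # block until the generated text comes back.
        return f"[{self.repo_id}] answer to: {prompt}"


model = FakeLLM(repo_id="google/flan-t5-small")  # step 2: create instance
response = model("What is AI?")                  # steps 3-4: send input, receive output
print(response)                                  # step 5: use the output
```

Running this prints `[google/flan-t5-small] answer to: What is AI?`, mirroring the step order in the execution table without needing network access or an API token.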
Execution Table
Step | Action | Input/Parameters | Result/Output
1 | Import HuggingFaceHub from langchain.llms | None | Module ready to use
2 | Create model instance | repo_id="google/flan-t5-small" | Model object created
3 | Send input to model | "What is AI?" | Model processes input
4 | Receive output | Model generates answer | "AI is the simulation of human intelligence by machines."
5 | Print output | Output from model | Printed answer on screen
💡 Execution stops after printing the model's response.
Variable Tracker
Variable | Start | After Step 2 | After Step 3 | After Step 4 | Final
model | None | HuggingFaceHub instance | HuggingFaceHub instance | HuggingFaceHub instance | HuggingFaceHub instance
response | None | None | None | "AI is the simulation of human intelligence by machines." | "AI is the simulation of human intelligence by machines."
Key Moments - 3 Insights
Why do we need to specify repo_id when creating the model?
The repo_id tells LangChain which open-source model to load from the Hugging Face Hub, as shown in step 2 of the execution table.
What happens if the model takes time to respond?
The code waits at step 3 until the model finishes processing and returns output at step 4, so the response variable updates only after completion.
Can we reuse the model variable for multiple inputs?
Yes, the model instance stays active after step 2 and can be called multiple times with different inputs without recreating it.
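The reuse point above can be sketched the same way: one instance, several calls. `FakeLLM` is again a hypothetical stand-in, not a LangChain class:

```python
class FakeLLM:
    """Minimal stand-in for an LLM wrapper (illustration only)."""

    def __init__(self, repo_id):
        self.repo_id = repo_id

    def __call__(self, prompt):
        return f"answer to: {prompt}"


model = FakeLLM(repo_id="google/flan-t5-small")   # created once
questions = ["What is AI?", "What is LangChain?"]
answers = [model(q) for q in questions]           # same instance, many calls
for a in answers:
    print(a)
```

Creating the instance once and calling it repeatedly avoids repeating any setup cost (for a real wrapper, things like authentication and configuration) on every question.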
Visual Quiz - 3 Questions
Test your understanding
Looking at the execution table, what is the value of 'response' after step 3?
A. None
B. Processing input
C. "AI is the simulation of human intelligence by machines."
D. Model object created
💡 Hint
Check the variable tracker row for 'response' after step 3.
At which step does the model instance get created?
A. Step 1
B. Step 2
C. Step 3
D. Step 4
💡 Hint
Look at the 'Action' column in the execution table for model creation.
If you want to ask another question, what should you do with the 'model' variable?
A. Create a new model instance again
B. Set repo_id again
C. Call the model variable with the new input
D. Print the model variable
💡 Hint
Refer to the key moments about reusing the model instance.
Concept Snapshot
Connecting to open-source models with LangChain:
- Import the model class (e.g., HuggingFaceHub)
- Specify the model repo_id from HuggingFace Hub
- Create a model instance
- Call the model with input text
- Receive and use the output
- Reuse the model instance for multiple queries
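The "set model parameters" step in the snapshot maps to the real wrapper's model_kwargs argument (e.g. temperature, max_length). A minimal sketch with the same hypothetical `FakeLLM` stub, so the pass-through is visible without calling the API:

```python
class FakeLLM:
    """Stand-in for an LLM wrapper that accepts model_kwargs (illustration only)."""

    def __init__(self, repo_id, model_kwargs=None):
        self.repo_id = repo_id
        self.model_kwargs = model_kwargs or {}  # generation settings

    def __call__(self, prompt):
        # Echo one setting back so the parameter flow is visible.
        temp = self.model_kwargs.get("temperature", 1.0)
        return f"(temperature={temp}) answer to: {prompt}"


model = FakeLLM(
    repo_id="google/flan-t5-small",
    model_kwargs={"temperature": 0.5, "max_length": 64},
)
response = model("What is AI?")
print(response)
```

In a real wrapper, lower temperature makes the output more deterministic and max_length caps the length of the generated answer; here the stub simply shows that the parameters are stored at creation time and available on every call.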
Full Transcript
This lesson shows how to connect to open-source models using LangChain. First, you import the HuggingFaceHub class. Then you create a model instance by specifying the repo_id of the model you want to use. Next, you send input text to the model instance, which processes it and returns an output. Finally, you print or use the output in your application. The model instance can be reused for multiple inputs without recreating it. This step-by-step flow helps beginners see how the connection and data flow happen in code.