FAISS (Facebook AI Similarity Search) finds similar items quickly by comparing numerical representations called vectors. Setting it up lets your program search and match data fast.
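To make "comparing vectors" concrete, here is a minimal brute-force nearest-neighbor search in plain NumPy with made-up toy vectors. FAISS does the same job, but with optimized index structures:

```python
import numpy as np

# Toy 4-dimensional vectors standing in for embedded texts (made-up values)
vectors = np.array([
    [1.0, 0.0, 0.0, 0.0],  # "apples"
    [0.9, 0.1, 0.0, 0.0],  # "oranges"
    [0.0, 0.0, 1.0, 0.0],  # "cars"
], dtype=np.float32)

query = np.array([1.0, 0.05, 0.0, 0.0], dtype=np.float32)

# L2 (Euclidean) distance -- the metric used by FAISS's basic flat index
distances = np.linalg.norm(vectors - query, axis=1)
nearest = int(np.argmin(distances))
print(nearest)  # index of the closest vector -> 0 ("apples")
```

The "apples" vector is closest to the query, so index 0 wins. Embeddings turn text into vectors like these so that semantically similar texts end up close together.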
FAISS vector store setup in LangChain
```python
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings

# Create embeddings object
embeddings = OpenAIEmbeddings()

# Create FAISS vector store from texts
vector_store = FAISS.from_texts(texts, embeddings)

# Save vector store to disk
vector_store.save_local("faiss_index")

# Load vector store from disk
vector_store = FAISS.load_local("faiss_index", embeddings)
```
Use OpenAIEmbeddings() or any other embedding model to convert text to vectors.
Saving and loading lets you reuse the vector store without rebuilding it.
```python
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings

texts = ["Hello world", "Hi there"]
embeddings = OpenAIEmbeddings()
vector_store = FAISS.from_texts(texts, embeddings)

vector_store.save_local("my_faiss")
vector_store = FAISS.load_local("my_faiss", embeddings)
```

This program creates a FAISS vector store from a few fruit-related sentences, saves it, loads it again, and searches for texts similar to "I like fruit". It prints the two closest matches.
```python
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings

# Sample texts to store
texts = ["I love apples", "I enjoy oranges", "Bananas are great"]

# Create embeddings object
embeddings = OpenAIEmbeddings()

# Build FAISS vector store from texts
vector_store = FAISS.from_texts(texts, embeddings)

# Save the vector store locally
vector_store.save_local("faiss_example")

# Load the vector store back
loaded_store = FAISS.load_local("faiss_example", embeddings)

# Search for similar text
query = "I like fruit"
results = loaded_store.similarity_search(query, k=2)

# Print the top 2 similar texts
for i, doc in enumerate(results, 1):
    print(f"Result {i}: {doc.page_content}")
```
Make sure you have the OpenAI API key set up to use OpenAIEmbeddings.
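One common way to provide the key is through the OPENAI_API_KEY environment variable, which OpenAIEmbeddings reads automatically. A minimal sketch (the key value shown is a placeholder, not a real key):

```python
import os

# Set the key only if it is not already present in the environment
# (replace the placeholder with your actual key, or export it in your shell)
os.environ.setdefault("OPENAI_API_KEY", "your-api-key-here")

print("OPENAI_API_KEY" in os.environ)  # True
```

Setting the variable in your shell profile (or a secrets manager) is preferable to hard-coding it in source files.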
FAISS works best with many vectors; small sets still work but are less efficient.
Saving and loading vector stores helps avoid rebuilding embeddings every time.
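The rebuild-avoidance pattern above can be sketched generically. `build_store`, `save_store`, and `load_store` below are hypothetical stand-ins for the `FAISS.from_texts`, `save_local`, and `FAISS.load_local` calls shown earlier; the demonstration uses in-memory dummies so it runs without any API key:

```python
import os
import tempfile
from pathlib import Path

def load_or_build(index_dir, build_store, save_store, load_store):
    """Load a persisted store if it exists; otherwise build and save it."""
    if Path(index_dir).exists():
        return load_store(index_dir)        # e.g. FAISS.load_local(...)
    store = build_store()                   # e.g. FAISS.from_texts(...)
    save_store(store, index_dir)            # e.g. store.save_local(...)
    return store

# Demonstration with in-memory stand-ins instead of real FAISS calls
with tempfile.TemporaryDirectory() as tmp:
    index_dir = os.path.join(tmp, "idx")
    calls = []

    def build():
        calls.append("build")
        return {"vectors": [1, 2, 3]}

    def save(store, d):
        os.makedirs(d, exist_ok=True)       # pretend to persist the store

    def load(d):
        calls.append("load")
        return {"vectors": [1, 2, 3]}

    first = load_or_build(index_dir, build, save, load)   # builds and saves
    second = load_or_build(index_dir, build, save, load)  # loads from disk
    print(calls)  # ['build', 'load']
```

The second call skips the expensive build step entirely, which is exactly what saving and reloading a FAISS index buys you: the embedding API calls happen only once.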
FAISS vector store helps find similar items fast using vectors.
Use LangChain's FAISS class with embeddings to create and manage the store.
Save your vector store to reuse it later without rebuilding.