LangChain framework · ~5 mins

Contextual compression in LangChain

Introduction

Contextual compression reduces the size of text data while preserving its important meaning, making it cheaper and faster to process with language models. In LangChain it is typically applied to retrieved documents, so that only the content relevant to a query reaches the model. Typical use cases:

When you want to send large text data to a language model but need to save space or speed up processing.
When you want to keep only the most important parts of a conversation or document for analysis.
When you want to improve performance by reducing unnecessary details in text inputs.
When working with limited memory or bandwidth and need smaller text chunks.
When preparing text data for embedding or search to focus on key information.
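The core idea can be sketched without any LLM at all. The toy compressor below (an illustration only, not part of LangChain) keeps just the sentences that share a word with the query; LangChain's contextual compression makes the same keep-or-drop decision with a language model instead of word overlap:

```python
import re

def toy_compress(text: str, query: str) -> str:
    """Keep only the sentences that share at least one word with the query.

    A stand-in for what contextual compression does: judge each piece of
    text for relevance to a query and drop the rest.
    """
    query_words = set(re.findall(r"\w+", query.lower()))
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    kept = [s for s in sentences
            if query_words & set(re.findall(r"\w+", s.lower()))]
    return " ".join(kept)

text = ("LangChain builds LLM apps. The weather was sunny. "
        "Compression keeps relevant sentences.")
print(toy_compress(text, "What does LangChain compression do?"))
# The unrelated weather sentence is dropped.
```

An LLM-based compressor replaces the word-overlap test with a semantic judgment, so it can also keep relevant sentences that share no vocabulary with the query.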
Syntax
LangChain
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain.schema import Document

# Create a language model instance
llm = OpenAI(temperature=0)

# Create a prompt for compression; the extractor fills {question}
# from the query and {context} from each document's text
prompt = PromptTemplate(
    template=(
        "Given this question: {question}\n"
        "Compress the following text while keeping key information:\n\n{context}"
    ),
    input_variables=["question", "context"],
)

# Create the LLM chain for compression
llm_chain = LLMChain(llm=llm, prompt=prompt)

# Create the compressor
compressor = LLMChainExtractor(llm_chain=llm_chain)

# Compress text (as a document); the query guides what to keep
compressed_docs = compressor.compress_documents(
    [Document(page_content=long_text)], query="What are the key points?"
)
compressed_text = compressed_docs[0].page_content

The LLMChainExtractor uses an LLM chain to contextually compress documents: for each document, it asks the model to keep only the content relevant to the query passed to compress_documents.

You can construct it from an existing LLMChain (whose prompt takes question and context variables), or call LLMChainExtractor.from_llm(llm) to use the library's default extraction prompt. In retrieval pipelines it is usually wrapped in a ContextualCompressionRetriever together with a base retriever.

Examples
This example shows how to create a compressor and compress a simple text string.
LangChain
from langchain.llms import OpenAI
from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain.schema import Document

llm = OpenAI(temperature=0)
# from_llm builds the extraction chain with the library's default prompt
compressor = LLMChainExtractor.from_llm(llm)

text = "This is a long text that needs to be compressed."
compressed = compressor.compress_documents(
    [Document(page_content=text)], query="What needs to happen to the text?"
)[0].page_content
This example compresses a longer text, useful for documents or conversations.
LangChain
from langchain.llms import OpenAI
from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain.schema import Document

llm = OpenAI(temperature=0)
compressor = LLMChainExtractor.from_llm(llm)

long_text = """Here is a very long document or conversation that you want to shorten while keeping the main ideas."""
compressed_text = compressor.compress_documents(
    [Document(page_content=long_text)], query="What are the main ideas?"
)[0].page_content
Sample Program

This program compresses a paragraph using LLMChainExtractor. It prints the original text length, the compressed text, and the compressed text length.

LangChain
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain.schema import Document

# Initialize the language model
llm = OpenAI(temperature=0)

# Create a compression prompt; the extractor fills {question}
# from the query and {context} from each document
prompt = PromptTemplate(
    input_variables=["question", "context"],
    template=(
        "Given this question: {question}\n"
        "Compress the following text while preserving key meaning: {context}"
    ),
)

# Create LLM chain
llm_chain = LLMChain(llm=llm, prompt=prompt)

# Create compressor
compressor = LLMChainExtractor(llm_chain=llm_chain)

# Original long text
long_text = """LangChain helps developers build applications with language models. It provides tools to manage prompts, chains, and memory. Contextual compression reduces text size while keeping meaning."""

# Compress the text, guided by a query
doc = Document(page_content=long_text)
compressed_docs = compressor.compress_documents(
    [doc], query="What is contextual compression?"
)
# The compressor may drop a document entirely if nothing is relevant
compressed_text = compressed_docs[0].page_content if compressed_docs else ""

print("Original text length:", len(long_text))
print("Compressed text:", compressed_text)
print("Compressed text length:", len(compressed_text))
Important Notes

Contextual compression depends on the quality of the language model used, and it costs one extra LLM call per document, which adds latency and expense.

It works best on longer texts where summarizing can reduce size significantly.

Always check the compressed output to ensure important details are kept.
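One way to act on that last note is a small post-compression sanity check. The helper below is hypothetical (not a LangChain API); it reports the compression ratio and flags any required terms that were lost:

```python
def check_compression(original: str, compressed: str,
                      required_terms: list[str]) -> dict:
    """Report the compression ratio and any required terms lost in compression."""
    ratio = len(compressed) / len(original) if original else 1.0
    missing = [t for t in required_terms if t.lower() not in compressed.lower()]
    return {"ratio": round(ratio, 2), "missing_terms": missing,
            "ok": not missing and ratio < 1.0}

report = check_compression(
    original="LangChain helps developers build applications with language models.",
    compressed="LangChain helps build LLM apps.",
    required_terms=["LangChain", "applications"],
)
print(report)  # flags "applications" as a lost term
```

If required terms go missing, loosen the compression prompt or fall back to the original document.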

Summary

Contextual compression shrinks text while keeping its main meaning.

It uses language models to summarize or compress text smartly.

Useful for saving space and speeding up text processing in language apps.