Performance: Handling follow-up questions
MEDIUM IMPACT
Efficient context management determines how responsive and coherent a conversational AI feels when handling follow-up questions.
**With memory (good): context is reused across turns**

```typescript
// Import paths vary by LangChain version; these match recent releases.
import { OpenAI } from "@langchain/openai";
import { BufferMemory } from "langchain/memory";
import { ConversationChain } from "langchain/chains";

const openai = new OpenAI({ temperature: 0 });
const memory = new BufferMemory();
const chain = new ConversationChain({ llm: openai, memory });

await chain.call({ input: "What is AI?" });
// The follow-up resolves against the stored first exchange.
await chain.call({ input: "And how does it work?" });
```

**Without memory (bad): every call starts from scratch**

```typescript
// No memory attached: the follow-up question has no access
// to the earlier exchange and reads as a non sequitur.
const chain = new ConversationChain({ llm: openai });

await chain.call({ input: "What is AI?" });
await chain.call({ input: "And how does it work?" });
```

| Pattern | Context Management | LLM Calls | Response Time | Verdict |
|---|---|---|---|---|
| No context reuse | None | Multiple full calls | High latency | [X] Bad |
| Context reuse with memory | Efficient | Incremental calls | Lower latency | [OK] Good |
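To make the table concrete, here is a minimal sketch of what buffer-style memory does under the hood: it accumulates prior turns and prepends them to each new prompt, so a follow-up like "And how does it work?" resolves against the earlier question. The names here (`SimpleBufferMemory`, `callLLM`, `ask`) are illustrative stand-ins, not LangChain APIs.

```typescript
type Turn = { human: string; ai: string };

class SimpleBufferMemory {
  private turns: Turn[] = [];

  // Store one completed exchange.
  save(human: string, ai: string): void {
    this.turns.push({ human, ai });
  }

  // Render the accumulated history as prompt context.
  render(): string {
    return this.turns
      .map((t) => `Human: ${t.human}\nAI: ${t.ai}`)
      .join("\n");
  }
}

// Stand-in for a real LLM call; a production chain would
// send the prompt to a model provider instead.
function callLLM(prompt: string): string {
  return `response to: ${prompt.split("\n").pop()}`;
}

// Each call prepends the stored history, then records the new turn.
function ask(memory: SimpleBufferMemory, input: string): string {
  const prompt = `${memory.render()}\nHuman: ${input}`.trim();
  const answer = callLLM(prompt);
  memory.save(input, answer);
  return answer;
}

const memory = new SimpleBufferMemory();
ask(memory, "What is AI?");
// The second prompt now carries the first exchange as context.
ask(memory, "And how does it work?");
```

Note the trade-off this implies: the prompt grows with every turn, which is why capping or summarizing history matters for long conversations.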