Experiment - Why RAG grounds LLMs in real data
Problem: You have a large language model (LLM) that generates text but sometimes fabricates facts because it relies solely on its internal (parametric) knowledge. This leads to incorrect or outdated answers.
Current Metrics: Accuracy on fact-based questions is 65%, with frequent hallucinations (made-up facts).
Issue: The LLM is not grounded in real, up-to-date data, resulting in low factual accuracy and hallucinated answers.
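The grounding idea behind RAG can be sketched as: retrieve relevant documents for the query, then prepend them to the prompt so the model answers from that context rather than from memory alone. Below is a minimal, self-contained sketch assuming a toy in-memory document store and simple word-overlap scoring; a real system would use an embedding model and a vector database, and `DOCUMENTS`, `retrieve`, and `build_prompt` are illustrative names, not a specific library's API.

```python
import re

# Toy document store (assumed for illustration); in practice this would be
# a vector index over your real, up-to-date corpus.
DOCUMENTS = [
    "The Eiffel Tower was completed in 1889 and is about 330 metres tall.",
    "RAG retrieves external documents to ground LLM answers in real data.",
    "Hallucinations occur when a model relies only on parametric knowledge.",
]

def _tokens(text: str) -> set[str]:
    """Lowercase word tokens; a crude stand-in for embedding similarity."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query and return the top k."""
    q = _tokens(query)
    ranked = sorted(docs, key=lambda d: len(q & _tokens(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context_docs: list[str]) -> str:
    """Prepend retrieved context so the LLM answers from data, not memory."""
    context = "\n".join(f"- {d}" for d in context_docs)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

if __name__ == "__main__":
    query = "How tall is the Eiffel Tower?"
    grounded_prompt = build_prompt(query, retrieve(query, DOCUMENTS))
    print(grounded_prompt)
```

The key design choice is that the generation step (the LLM call, omitted here) receives evidence at inference time, so answers can reflect data newer than the model's training cutoff.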