Experiment - LLM wrappers
Problem: You have a large language model (LLM) that generates text, but it sometimes produces irrelevant or unsafe outputs. You want to improve response quality and safety by adding a wrapper around the LLM.
Current Metrics: Relevance score: 65%; Safety incidents: 10 per 1,000 responses
Issue: The LLM outputs are not always relevant or safe, causing user dissatisfaction and potential harm. A minimal wrapper sketch for the experiment follows below.
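As a starting point for this experiment, here is a minimal sketch of such a wrapper, assuming a Python setup where the LLM is exposed as a callable that maps a prompt string to a response string. The class name, the blocked-terms list, and the word-overlap relevance check are illustrative placeholders, not a production-grade filter or any specific library's API.

```python
from typing import Callable, List


class SafeLLMWrapper:
    """Wraps a text-generation callable with simple relevance and safety checks."""

    def __init__(self, generate: Callable[[str], str],
                 blocked_terms: List[str],
                 max_retries: int = 2,
                 fallback: str = "Sorry, I can't provide a reliable answer to that."):
        self.generate = generate
        self.blocked_terms = [t.lower() for t in blocked_terms]
        self.max_retries = max_retries
        self.fallback = fallback

    def _is_safe(self, text: str) -> bool:
        # Placeholder safety check: flag responses containing blocked terms.
        lowered = text.lower()
        return not any(term in lowered for term in self.blocked_terms)

    def _is_relevant(self, prompt: str, response: str) -> bool:
        # Placeholder relevance check: require some lexical overlap with the prompt.
        prompt_words = set(prompt.lower().split())
        response_words = set(response.lower().split())
        return len(prompt_words & response_words) > 0

    def respond(self, prompt: str) -> str:
        # Regenerate up to max_retries times if a check fails, then fall back.
        for _ in range(self.max_retries + 1):
            candidate = self.generate(prompt)
            if self._is_safe(candidate) and self._is_relevant(prompt, candidate):
                return candidate
        return self.fallback


if __name__ == "__main__":
    # Stand-in for a real LLM call (e.g., an API client); hypothetical, for illustration only.
    def dummy_llm(prompt: str) -> str:
        return f"Here is a response about {prompt}."

    wrapper = SafeLLMWrapper(dummy_llm, blocked_terms=["exploit", "self-harm"])
    print(wrapper.respond("prompt caching"))
```

In practice the two placeholder checks would be swapped for stronger components (for example, a moderation classifier for safety and an embedding-similarity score for relevance), and the relevance and safety metrics above would be re-measured against the wrapped model.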