Experiment - Document loading and chunking strategies
Problem: You want to load large text documents and split them into smaller pieces (chunks) so an AI agent can process and understand them effectively. Currently, the chunks are either too big or too small, causing slow processing or loss of important context.
Current Metrics: Average chunk size: 2000 characters; processing time per document: 15 seconds; context loss rate: 30%
Issue: Chunks are too large, causing slow processing, and some chunks lose important context because the splitting is not boundary-aware (it cuts mid-sentence rather than at natural breaks).
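One common remedy for both problems is smaller chunks split at sentence boundaries, with a small overlap between consecutive chunks so context near a boundary is not lost. The sketch below is a minimal, library-free illustration of that idea; the function name and the parameter values (a 500-character target and 50-character overlap) are assumptions for the example, not the experiment's actual settings.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks, preferring sentence boundaries.

    chunk_size and overlap are illustrative defaults, not tuned values.
    """
    chunks = []
    start = 0
    while start < len(text):
        end = min(start + chunk_size, len(text))
        # If we are not at the end of the text, try to back up to the
        # last sentence boundary (". ") inside the current window.
        if end < len(text):
            boundary = text.rfind(". ", start, end)
            if boundary > start:
                end = boundary + 1  # keep the period with the chunk
        chunk = text[start:end].strip()
        if chunk:
            chunks.append(chunk)
        if end >= len(text):
            break
        # Step back by `overlap` characters so adjacent chunks share
        # context, but always make forward progress.
        start = end - overlap if end - overlap > start else end
    return chunks
```

Shrinking the chunk size (2000 to roughly 500 characters here) addresses the slow per-chunk processing, while the boundary search and overlap target the context-loss rate; both knobs would need to be tuned against the metrics above.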