Experiment - Sentence-BERT for embeddings
Problem: You want to produce meaningful sentence embeddings with Sentence-BERT so that embedding similarity reflects semantic similarity. The current model's embeddings do not capture semantic similarity well.
Current Metrics: Cosine similarity is around 0.4 for semantically similar sentence pairs and around 0.6 for dissimilar pairs, the opposite of the expected ordering.
Issue: The embeddings come from a model that has not been trained or fine-tuned for sentence similarity, so cosine similarity in the embedding space does not track semantic relatedness.
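As a minimal sketch of the evaluation described above, the snippet below computes cosine similarity between embedding vectors with NumPy. The toy vectors stand in for real model outputs (in practice they would come from something like `SentenceTransformer("all-MiniLM-L6-v2").encode(sentences)`; that model name is an assumption, not specified in this note). A well-behaved model should score the similar pair higher than the dissimilar pair, which is the check the metrics above fail.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings standing in for model outputs; in practice these would
# come from a sentence-embedding model's encode() call (hypothetical data).
emb_similar_a = np.array([0.9, 0.1, 0.2])
emb_similar_b = np.array([0.85, 0.15, 0.25])
emb_different = np.array([0.1, 0.9, 0.3])

sim_pair = cosine_similarity(emb_similar_a, emb_similar_b)
diff_pair = cosine_similarity(emb_similar_a, emb_different)

# A well-trained model should give sim_pair > diff_pair; the observed
# metrics (0.4 for similar vs 0.6 for different) invert this ordering.
print(f"similar pair:   {sim_pair:.3f}")
print(f"different pair: {diff_pair:.3f}")
```

Running this sanity check on a handful of labeled sentence pairs is a quick way to confirm whether a candidate model orders similarities correctly before committing to fine-tuning.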