Sentence-BERT produces vector representations (embeddings) of sentences. A common way to evaluate these embeddings is cosine similarity, which measures the angle between two sentence vectors: scores near 1 suggest the sentences mean similar things, while scores near 0 suggest unrelated content. For tasks like semantic similarity or clustering, cosine similarity shows whether the model places related sentences close together in the embedding space.
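As a minimal sketch of the idea, the snippet below implements cosine similarity from scratch and applies it to tiny hand-made vectors. The 3-dimensional vectors and their names (`emb_cat`, `emb_kitten`, `emb_stock`) are illustrative stand-ins; real Sentence-BERT embeddings typically have several hundred dimensions and come from the model itself.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity = dot product divided by the product of the norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional vectors standing in for sentence embeddings.
emb_cat = [0.9, 0.1, 0.0]     # "A cat sat on the mat."
emb_kitten = [0.8, 0.2, 0.1]  # "A kitten rested on the rug."
emb_stock = [0.0, 0.1, 0.9]   # "Stock markets fell sharply today."

print(cosine_similarity(emb_cat, emb_kitten))  # close to 1: similar meaning
print(cosine_similarity(emb_cat, emb_stock))   # close to 0: unrelated
```

With good embeddings, the first pair scores much higher than the second, which is exactly the behavior clustering and similarity tasks rely on.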
When Sentence-BERT embeddings are used for classification or retrieval, metrics such as accuracy, precision, and recall become important. They measure how well the embeddings help the model find or classify the right sentences.
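To make those metrics concrete, here is a small sketch that computes accuracy, precision, and recall from their standard definitions. The binary labels are a hypothetical retrieval run invented for illustration: 1 marks a sentence the system should return, and `y_pred` is what the system actually returned.

```python
def precision_recall_accuracy(y_true, y_pred):
    # Binary labels: 1 = relevant/positive, 0 = not relevant.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0  # of retrieved, how many relevant
    recall = tp / (tp + fn) if tp + fn else 0.0     # of relevant, how many retrieved
    accuracy = (tp + tn) / len(y_true)              # fraction labeled correctly
    return precision, recall, accuracy

# Hypothetical ground truth and system predictions for six sentences.
y_true = [1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
p, r, a = precision_recall_accuracy(y_true, y_pred)
print(p, r, a)
```

Here the system retrieves three sentences, two of which are relevant (precision 2/3), and finds two of the three relevant sentences (recall 2/3). In practice, libraries such as scikit-learn provide these metrics ready-made.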