For measuring semantic similarity with embeddings, the key metric is cosine similarity. It measures how closely two vectors point in the same direction, regardless of their magnitudes, which makes it a good proxy for how similar two pieces of text are in meaning.
Why cosine similarity? Embeddings are numeric vectors that encode meaning, and cosine similarity captures the angle between them: texts with similar meanings tend to produce vectors pointing in similar directions, even when those vectors differ in length.
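As a minimal sketch of the idea, here is cosine similarity computed directly from its definition (dot product divided by the product of the vector norms). The three-dimensional "embeddings" below are made-up toy values purely for illustration; real embedding models produce vectors with hundreds or thousands of dimensions.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional "embeddings" (hypothetical values, for illustration only).
cat = np.array([0.9, 0.1, 0.3])
kitten = np.array([0.85, 0.15, 0.35])
car = np.array([0.1, 0.9, 0.2])

print(cosine_similarity(cat, kitten))  # close to 1.0: similar direction, similar meaning
print(cosine_similarity(cat, car))     # much lower: different direction, different meaning
```

The result always lies between -1 and 1 (between 0 and 1 for embeddings with non-negative components), with values near 1 indicating near-identical meaning.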
Euclidean distance or Manhattan distance are sometimes used instead, but cosine similarity remains the most common and intuitive choice for comparing meaning, partly because it ignores vector magnitude.
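The magnitude-invariance point can be demonstrated with a small comparison of the three measures. The vectors here are arbitrary example values: `b` points in exactly the same direction as `a` but is twice as long, so cosine similarity reports them as identical while the distance metrics do not.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = 2 * a  # same direction as a, twice the length

cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
euclidean = float(np.linalg.norm(a - b))   # L2 distance: sensitive to length
manhattan = float(np.sum(np.abs(a - b)))   # L1 distance: also sensitive to length

print(cosine)     # 1.0 -> same direction, so "identical" to cosine similarity
print(euclidean)  # nonzero -> the vectors are far apart in space
print(manhattan)  # nonzero -> likewise
```

Which behavior is right depends on the embedding model: some models normalize their vectors to unit length, in which case cosine similarity and Euclidean distance produce the same ranking of nearest neighbors.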