Word2Vec learns dense word vectors (embeddings) whose geometry is meant to reflect meaning. A common intrinsic check is cosine similarity: the cosine of the angle between two word vectors, which ranges from -1 to 1. A higher cosine similarity between two words indicates that they appear in similar contexts, and therefore are likely related in meaning.
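As a minimal sketch, cosine similarity is the dot product of two vectors divided by the product of their lengths. The tiny 3-dimensional vectors below are hypothetical, made up purely for illustration; real Word2Vec embeddings are learned from a corpus and typically have 100-300 dimensions.

```python
import math

def cosine_similarity(a, b):
    # cos(a, b) = (a . b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical toy vectors: "cat" and "dog" point in similar
# directions, "car" points elsewhere.
v_cat = [0.9, 0.8, 0.1]
v_dog = [0.8, 0.9, 0.2]
v_car = [0.1, 0.2, 0.9]

print(cosine_similarity(v_cat, v_dog))  # close to 1: similar usage
print(cosine_similarity(v_cat, v_car))  # much lower: dissimilar usage
```

Because cosine similarity ignores vector length and only compares direction, it is preferred over Euclidean distance for embeddings, where frequent words can have larger norms without being more "meaningful".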
We also measure analogy accuracy. For example, if computing "king - man + woman" in vector space and searching for the nearest word vector yields "queen", the embeddings have captured a meaningful relational structure (here, gender).
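The analogy test can be sketched as vector arithmetic followed by a nearest-neighbor search under cosine similarity, with the three query words excluded from the candidates (the standard convention, since the result vector usually stays closest to "king" itself). The toy embeddings below are hypothetical and hand-constructed so the analogy works; real embeddings are learned from text.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 2-D toy embeddings, built so that
# king - man + woman lands exactly on queen.
emb = {
    "man":    [1.0, 0.0],
    "woman":  [0.0, 1.0],
    "king":   [1.0, 1.0],
    "queen":  [0.0, 2.0],
    "prince": [1.0, 0.5],
}

def solve_analogy(a, b, c, emb):
    """Return the word nearest to b - a + c, excluding a, b, c."""
    target = [xb - xa + xc for xa, xb, xc in zip(emb[a], emb[b], emb[c])]
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(emb[w], target))

print(solve_analogy("man", "king", "woman", emb))  # queen
```

Analogy accuracy is then simply the fraction of such questions in a benchmark set that the model answers correctly.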