When using pre-trained embeddings, the right metrics depend on the downstream task. For text classification, accuracy, precision, and recall matter because they show how well the embedding-based model captures text meaning. For similarity tasks, cosine similarity (or mean squared error between embedding pairs) measures how closely the embeddings of semantically related texts align.
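As a minimal sketch of the similarity case, the snippet below computes cosine similarity between embedding vectors with NumPy. The vectors here are made-up low-dimensional examples, not real pre-trained embeddings, so the exact numbers are only illustrative.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 4-dimensional embeddings for illustration only.
king = np.array([0.5, 0.8, 0.1, 0.3])
queen = np.array([0.45, 0.75, 0.2, 0.35])
apple = np.array([0.9, 0.1, 0.7, 0.05])

print(cosine_similarity(king, queen))  # near 1.0: vectors point the same way
print(cosine_similarity(king, apple))  # lower: vectors are less aligned
```

A score near 1 means the two embeddings point in nearly the same direction, which is how "similar meaning" is usually operationalized for embedding evaluation.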
Pre-trained embeddings give a model a head start with meaningful word representations, so these metrics also reveal whether that initialization helps the model learn better or converge faster than training embeddings from scratch.