Positional encoding is used in models such as Transformers to inject information about the order of tokens, since self-attention alone is permutation-invariant. Because it is part of the input representation, its impact is evaluated indirectly through the model's overall performance metrics, such as loss and accuracy on the downstream task (e.g., translation, classification).
We do not measure positional encoding in isolation; instead, we compare models trained with and without it (an ablation) and observe how it affects learning on sequences. Lower loss and higher accuracy with positional encoding indicate that it helps the model exploit token order.
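As a concrete illustration, the fixed sinusoidal encoding from the original Transformer paper ("Attention Is All You Need", Vaswani et al., 2017) can be sketched in NumPy. The function name and shapes here are illustrative, not from a specific library:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Return a (seq_len, d_model) matrix of sinusoidal position encodings.

    Even dimensions use sin, odd dimensions use cos, with wavelengths
    forming a geometric progression from 2*pi up to 10000*2*pi.
    """
    positions = np.arange(seq_len)[:, np.newaxis]      # shape (seq_len, 1)
    dims = np.arange(0, d_model, 2)[np.newaxis, :]     # shape (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)

    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)   # even indices
    pe[:, 1::2] = np.cos(angles)   # odd indices
    return pe

# The encoding is simply added to the token embeddings before the
# first Transformer layer, e.g.:
#   x = token_embeddings + sinusoidal_positional_encoding(seq_len, d_model)
```

An ablation then trains the same model with and without this added term and compares task loss and accuracy; any difference is attributed to the order information the encoding provides.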