LangSmith evaluators score how well a model's output matches a reference answer. An evaluator receives two inputs: the model's prediction and the correct reference. It then runs its comparison logic on the pair and produces a score, often accompanied by feedback explaining the judgment. For example, a StringEvaluator compares two text strings and returns a score close to 1 when they are very similar. The evaluation ends by returning this score and feedback, which helps developers gauge how accurate their model's outputs are.
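The flow above can be sketched as a plain Python function. This is a minimal, self-contained illustration of the prediction-versus-reference pattern, not the actual LangSmith SDK: the function name, the `difflib` similarity metric, and the result keys (`key`, `score`, `comment`) are assumptions chosen to mirror the score-plus-feedback shape described in the text.

```python
from difflib import SequenceMatcher

def string_similarity_evaluator(prediction: str, reference: str) -> dict:
    """Compare a model's prediction to a reference answer.

    Returns a score in [0, 1]; 1.0 means the strings are identical.
    """
    # SequenceMatcher.ratio() measures textual similarity between the two strings.
    score = SequenceMatcher(None, prediction, reference).ratio()
    feedback = "exact match" if score == 1.0 else f"similarity ratio {score:.2f}"
    return {"key": "string_similarity", "score": score, "comment": feedback}

# Usage: score a prediction against the reference answer.
result = string_similarity_evaluator(
    "Paris is the capital of France.",
    "Paris is the capital of France.",
)
# An exact match yields a score of 1.0.
```

A real evaluator would typically plug a function like this into an evaluation harness that loops over a dataset of examples, but the core contract is the same: take a prediction and a reference, return a score with optional feedback.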