This visual execution shows how Elasticsearch token filters work step by step. First, the input text is split into tokens. Each token then passes through the lowercase filter, which converts all letters to lowercase. Next, the stemmer filter reduces words to their root forms, such as 'running' to 'run'. Finally, the synonym filter adds synonyms, sometimes expanding one token into several, such as 'quick' becoming 'quick' and 'fast'. The execution table traces each token through these filters, showing how it changes at every step, and the variable tracker summarizes token states after each filter. Key moments clarify common confusions, such as why synonym expansion produces more output tokens than input tokens, and why lowercasing runs first so that the stemmer and synonym filters see consistently cased input. The quiz tests understanding by asking about specific steps and the effect of each filter. By normalizing and expanding tokens this way, Elasticsearch can index and search text more effectively.
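The pipeline described above can be sketched in plain Python. This is a toy simulation of the filter chain, not Elasticsearch's actual implementation: the stem and synonym tables here are made-up stand-ins for a real algorithmic stemmer and a configured synonym list.

```python
# Toy simulation of an Elasticsearch-style token filter chain.
# STEMS and SYNONYMS are hypothetical lookup tables for illustration.

def lowercase_filter(tokens):
    # Lowercase every token so later filters see consistent casing.
    return [t.lower() for t in tokens]

def stemmer_filter(tokens, stems):
    # Reduce each token to its root form when a stem is known.
    return [stems.get(t, t) for t in tokens]

def synonym_filter(tokens, synonyms):
    # Keep the original token and append any synonyms after it,
    # so one input token can expand into multiple output tokens.
    out = []
    for t in tokens:
        out.append(t)
        out.extend(synonyms.get(t, []))
    return out

STEMS = {"running": "run"}
SYNONYMS = {"quick": ["fast"]}

tokens = "The Quick fox was Running".split()   # tokenizer step
tokens = lowercase_filter(tokens)              # ['the', 'quick', 'fox', 'was', 'running']
tokens = stemmer_filter(tokens, STEMS)         # 'running' -> 'run'
tokens = synonym_filter(tokens, SYNONYMS)      # 'quick' expands to 'quick', 'fast'
print(tokens)  # ['the', 'quick', 'fast', 'fox', 'was', 'run']
```

Note the ordering: because lowercasing runs first, 'Running' matches the stem entry and 'Quick' matches the synonym entry; reversing the order would make both lookups miss.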