This visual trace shows how Elasticsearch uses an edge n-gram tokenizer to support autocomplete. At index time, words like 'michael' are broken into prefix tokens such as 'm', 'mi', 'mic', and so on. When a user types a prefix like 'mic', Elasticsearch matches it against these stored tokens and returns suggestions like 'michael' and 'michelle'. The search analyzer is set to standard so the query is kept as one whole prefix rather than being broken into n-grams again. The execution table tracks each step from indexing to returning suggestions, and the variable tracker shows how tokens and queries evolve. Key moments clarify why multiple tokens are created and why the search analyzer differs from the index analyzer. The quiz tests understanding of token creation and the matching steps.
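The indexing and matching behavior described above can be sketched in plain Python. This is a minimal simulation, not the Elasticsearch implementation: the function and variable names (`edge_ngrams`, `build_index`, `suggest`) are illustrative, and the `min_gram`/`max_gram` parameters mirror the settings of the same names in Elasticsearch's `edge_ngram` tokenizer.

```python
def edge_ngrams(word, min_gram=1, max_gram=10):
    """Emit prefix tokens, e.g. 'mic' -> ['m', 'mi', 'mic']."""
    return [word[:i] for i in range(min_gram, min(len(word), max_gram) + 1)]

def build_index(words):
    """Index time: map each edge n-gram token to the words that produced it."""
    index = {}
    for word in words:
        for token in edge_ngrams(word.lower()):
            index.setdefault(token, set()).add(word)
    return index

def suggest(index, query):
    """Search time: the standard-style analyzer keeps the query as one whole
    token, which is matched exactly against the stored edge n-grams."""
    return sorted(index.get(query.lower(), set()))

index = build_index(["michael", "michelle", "mark"])
print(suggest(index, "mic"))  # ['michael', 'michelle']
```

Note the asymmetry the trace highlights: many tokens are generated per word at index time, but the query is left intact at search time. If the query were also edge-n-grammed, a search for 'mic' would match on the token 'm' alone and return nearly everything.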