Complete the code to define a standard tokenizer in an Elasticsearch analyzer.
"analyzer": { "my_analyzer": { "type": "custom", "tokenizer": "[1]" } }
The standard tokenizer splits text into terms on word boundaries, as defined by the Unicode Text Segmentation algorithm; it is Elasticsearch's default tokenizer and the most common choice.
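For context, a completed definition would sit inside an index's `settings.analysis` block. The sketch below shows the full request shape; the index name `my_index` is illustrative:

```json
PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "standard"
        }
      }
    }
  }
}
```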
Complete the code to add a lowercase filter to the analyzer filters list.
"analyzer": { "my_analyzer": { "type": "custom", "tokenizer": "standard", "filter": ["[1]"] } }
The lowercase filter converts all tokens to lowercase, enabling case-insensitive matching at search time.
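The effect of the completed analyzer can be inspected with the `_analyze` API. Assuming the analyzer above is defined on `my_index`, a request sketch looks like this:

```json
POST /my_index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "The QUICK Brown Fox"
}
```

With the standard tokenizer and lowercase filter, the returned tokens would be `the`, `quick`, `brown`, and `fox`.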
Fix the error in the filter list by choosing the correct filter name to remove stop words.
"analyzer": { "my_analyzer": { "type": "custom", "tokenizer": "standard", "filter": ["[1]"] } }
The stop filter removes common stop words like 'the', 'and', 'is' from the tokens.
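A completed version of the filter list might look like the sketch below. Note that `lowercase` usually precedes `stop`, because the default `_english_` stop-word list is lowercase and would not match tokens like `The` otherwise:

```json
"analyzer": {
  "my_analyzer": {
    "type": "custom",
    "tokenizer": "standard",
    "filter": ["lowercase", "stop"]
  }
}
```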
Fill the three blanks to create an analyzer that uses the whitespace tokenizer and applies lowercase and asciifolding filters.
"analyzer": { "my_analyzer": { "type": "custom", "tokenizer": "[1]", "filter": ["[2]", "[3]"] } }
The whitespace tokenizer splits text only on whitespace characters. The lowercase filter converts tokens to lowercase, and asciifolding converts accented and other non-ASCII characters to their closest ASCII equivalents (for example, 'é' becomes 'e').
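With the blanks filled in, the analyzer would read as follows. Unlike the standard tokenizer, whitespace tokenization keeps punctuation attached, so a text such as "Café Au-Lait" would yield the tokens `cafe` and `au-lait`:

```json
"analyzer": {
  "my_analyzer": {
    "type": "custom",
    "tokenizer": "whitespace",
    "filter": ["lowercase", "asciifolding"]
  }
}
```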
Fill all four blanks to define an analyzer with the standard tokenizer and filters: lowercase, stop, and porter_stem.
"analyzer": { "my_analyzer": { "type": "custom", "tokenizer": "[1]", "filter": ["[2]", "[3]", "[4]"] } }
This analyzer uses the standard tokenizer, then applies the filters in order: lowercase normalizes case, stop removes common words, and porter_stem applies the Porter stemming algorithm to reduce English words to their root form. Filter order matters: lowercasing first ensures the stop and stemming filters see normalized tokens.
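Putting it all together, the completed analyzer could be defined and attached to a text field in one request. The sketch below is illustrative; the index name `my_index` and field name `description` are assumptions:

```json
PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase", "stop", "porter_stem"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "description": {
        "type": "text",
        "analyzer": "my_analyzer"
      }
    }
  }
}
```

Any field mapped with `"analyzer": "my_analyzer"` will run both indexed text and (by default) query text through this pipeline.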