Elasticsearch query (~10 mins)

Analyzer components (tokenizer, filters) in Elasticsearch - Interactive Code Practice
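For context before the tasks: in Elasticsearch, a custom analyzer is declared under settings.analysis in the index settings and combines one tokenizer with zero or more token filters. A minimal sketch, assuming a placeholder index name my-index and analyzer name my_analyzer:

```
PUT my-index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["lowercase"]
        }
      }
    }
  }
}
```

The tokenizer splits text into tokens; the filters then transform those tokens in the order listed.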

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)

Complete the code to define a standard tokenizer in an Elasticsearch analyzer.

Elasticsearch
"analyzer": { "my_analyzer": { "type": "custom", "tokenizer": "[1]" } }
A. standard
B. keyword
C. whitespace
D. pattern
Common Mistakes
Using the 'keyword' tokenizer, which emits the entire input as a single token instead of splitting it.
Using the 'whitespace' tokenizer, which only splits on whitespace and keeps punctuation attached to tokens.
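You can compare tokenizers directly with the _analyze API; a quick sketch (the sample text is illustrative):

```
POST _analyze
{
  "tokenizer": "standard",
  "text": "The QUICK brown-foxes"
}
```

The standard tokenizer splits on word boundaries, producing The, QUICK, brown, foxes; swapping in "keyword" would return the whole string as one token.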
Task 2: Fill in the blank (medium)

Complete the code to add a lowercase filter to the analyzer filters list.

Elasticsearch
"analyzer": { "my_analyzer": { "type": "custom", "tokenizer": "standard", "filter": ["[1]"] } }
A. lowercase
B. asciifolding
C. uppercase
D. stop
Common Mistakes
Using the 'stop' filter, which removes common words rather than changing case.
Using the 'uppercase' filter, which converts tokens to uppercase instead of lowercase.
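The effect of the lowercase filter can be checked with _analyze; a quick sketch with illustrative text:

```
POST _analyze
{
  "tokenizer": "standard",
  "filter": ["lowercase"],
  "text": "Quick Brown Foxes"
}
```

This yields quick, brown, foxes, which is why lowercase is typically the first filter in search-oriented analyzers.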
Task 3: Fill in the blank (hard)

Fix the error in the filter list by choosing the correct filter name to remove stop words.

Elasticsearch
"analyzer": { "my_analyzer": { "type": "custom", "tokenizer": "standard", "filter": ["[1]"] } }
A. porter_stem
B. lowercase
C. keyword_marker
D. stop
Common Mistakes
Using the 'lowercase' filter, which only changes case and removes nothing.
Using 'porter_stem', which reduces words to their root form rather than removing stop words.
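The stop filter's behavior is easy to verify with _analyze; a sketch using illustrative text (by default the filter uses the English stop-word list):

```
POST _analyze
{
  "tokenizer": "standard",
  "filter": ["stop"],
  "text": "the quick brown fox"
}
```

The token "the" is dropped, while quick, brown, fox pass through unchanged.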
Task 4: Fill in the blank (hard)

Fill the three blanks to create an analyzer that uses the whitespace tokenizer and applies lowercase and asciifolding filters.

Elasticsearch
"analyzer": { "my_analyzer": { "type": "custom", "tokenizer": "[1]", "filter": ["[2]", "[3]"] } }
A. whitespace
B. lowercase
C. asciifolding
D. stop
Common Mistakes
Using the 'standard' tokenizer when the task asks for 'whitespace'.
Mixing up the filter names, or reversing the order in which the filters are listed.
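To see both filters working together, a sketch with _analyze (the accented sample text is illustrative):

```
POST _analyze
{
  "tokenizer": "whitespace",
  "filter": ["lowercase", "asciifolding"],
  "text": "Café MENU"
}
```

lowercase maps Café and MENU to café and menu, then asciifolding strips the accent, yielding cafe and menu.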
Task 5: Fill in the blank (hard)

Fill all four blanks to define an analyzer with the standard tokenizer and filters: lowercase, stop, and porter_stem.

Elasticsearch
"analyzer": { "my_analyzer": { "type": "custom", "tokenizer": "[1]", "filter": ["[2]", "[3]", "[4]"] } }
A. standard
B. lowercase
C. stop
D. porter_stem
Common Mistakes
Using the wrong tokenizer name.
Listing the filters in the wrong order, or leaving one of them out. Order matters: lowercase should run before stop, since the default stop-word list is lowercase.
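The full filter chain from this task can be previewed with _analyze; a sketch using illustrative text:

```
POST _analyze
{
  "tokenizer": "standard",
  "filter": ["lowercase", "stop", "porter_stem"],
  "text": "The Running Foxes"
}
```

After lowercasing, "the" is removed as a stop word, and porter_stem reduces running and foxes to run and fox.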