Elasticsearch query · ~10 mins

Testing analyzers (_analyze API) in Elasticsearch - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
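Each task below shows only the request body. In practice, that body is sent to the `_analyze` endpoint (here with no index name, so only built-in analyzers are available), for example in Kibana Dev Tools:

```
GET /_analyze
{
  "analyzer": "standard",
  "text": "Hello, World!"
}
```

The response lists the emitted tokens; the standard analyzer splits on word boundaries and lowercases, so this returns the tokens `hello` and `world`.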
Task 1: Fill in the blank (easy)

Complete the code to analyze the text "Quick Brown Fox" using the standard analyzer.

Elasticsearch
{
  "analyzer": "[1]",
  "text": "Quick Brown Fox"
}
A. whitespace
B. standard
C. simple
D. keyword
Common Mistakes
Using 'keyword' analyzer which does not tokenize the text.
Using 'whitespace' analyzer which only splits on spaces but does not lowercase.
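To see why 'whitespace' is a tempting but wrong choice here, compare its output on the same text; it splits on spaces but applies no lowercasing:

```
GET /_analyze
{
  "analyzer": "whitespace",
  "text": "Quick Brown Fox"
}
```

This returns `Quick`, `Brown`, `Fox` unchanged, whereas the standard analyzer lowercases each token.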
Task 2: Fill in the blank (medium)

Complete the code to analyze the text "Running runner runs" using the English stemmer analyzer.

Elasticsearch
{
  "tokenizer": "standard",
  "filter": ["[1]"],
  "text": "Running runner runs"
}
A. lowercase
B. english_stemmer
C. stop
D. porter_stem
Common Mistakes
Using 'english_stemmer', which is not a built-in token filter name.
Using 'stop' filter which removes stop words instead of stemming.
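A quick way to see the difference between the distractors: the 'stop' filter removes stop words but leaves word forms intact, while 'porter_stem' reduces inflected forms. With the correct filter in place (paired with 'lowercase', since the Porter algorithm assumes lowercase input):

```
GET /_analyze
{
  "tokenizer": "standard",
  "filter": ["lowercase", "porter_stem"],
  "text": "Running runner runs"
}
```

Here `running` and `runs` both reduce to `run` (the Porter rules happen to leave `runner` as-is).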
Task 3: Fill in the blank (hard)

Fix the error in the analyzer definition to correctly lowercase the text "HELLO WORLD".

Elasticsearch
{
  "tokenizer": "standard",
  "filter": ["[1]"],
  "text": "HELLO WORLD"
}
A. lowercase
B. uppercase
C. stop
D. asciifolding
Common Mistakes
Using 'uppercase' filter which does the opposite.
Using 'stop' filter which removes common words but does not change case.
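The 'asciifolding' distractor is worth knowing: it normalizes accented characters to their ASCII equivalents but, like 'stop', never changes case. A small sketch:

```
GET /_analyze
{
  "tokenizer": "standard",
  "filter": ["asciifolding"],
  "text": "Café"
}
```

This returns the single token `Cafe`; only the 'lowercase' filter would turn `HELLO` into `hello`.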
Task 4: Fill in the blank (hard)

Fill both blanks to analyze the text "Cats running" with a custom analyzer that uses the standard tokenizer and the lowercase and porter_stem filters.

Elasticsearch
{
  "tokenizer": "[1]",
  "filter": ["[2]", "porter_stem"],
  "text": "Cats running"
}
A. standard
B. keyword
C. lowercase
D. stop
Common Mistakes
Using 'keyword' tokenizer which does not split text.
Omitting the lowercase filter which causes stemming to fail on uppercase tokens.
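Filter order matters here: filters run left to right, so 'lowercase' must appear before 'porter_stem'. The completed request:

```
GET /_analyze
{
  "tokenizer": "standard",
  "filter": ["lowercase", "porter_stem"],
  "text": "Cats running"
}
```

This yields the tokens `cat` and `run`; reversing the filter order leaves the stemmer working on mixed-case input, which the Porter algorithm does not handle reliably.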
Task 5: Fill in the blank (hard)

Fill all three blanks to create an analyzer that tokenizes text on whitespace, lowercases tokens, and removes English stop words.

Elasticsearch
{
  "tokenizer": "[1]",
  "filter": ["[2]", "[3]"],
  "text": "The quick brown fox jumps over the lazy dog"
}
A. standard
B. whitespace
C. lowercase
D. stop
Common Mistakes
Using 'standard' tokenizer which splits on punctuation too.
Omitting the stop filter which leaves stop words in tokens.
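For reference, the completed request chains the whitespace tokenizer with both filters; filters run in order, so lowercasing happens before stop-word removal (the default stop list is lowercase):

```
GET /_analyze
{
  "tokenizer": "whitespace",
  "filter": ["lowercase", "stop"],
  "text": "The quick brown fox jumps over the lazy dog"
}
```

Both occurrences of `the` are dropped. Note that Lucene's default English stop list is fairly small; a word like `over` is not on it and survives.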