Elasticsearch · Query · ~10 mins

Autocomplete with edge n-gram in Elasticsearch - Interactive Code Practice

Practice - 5 Tasks
Answer the questions below
Task 1: Fill in the blank (easy)

Complete the code to define an edge n-gram tokenizer named "autocomplete_tokenizer".

Elasticsearch
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "autocomplete_tokenizer": {
          "type": "[1]",
          "min_gram": 1,
          "max_gram": 20
        }
      }
    }
  }
}
Options:
A. whitespace
B. keyword
C. edge_ngram
D. standard
Common Mistakes
Using 'standard' tokenizer which does not generate n-grams.
Using 'keyword' tokenizer which treats the whole input as one token.
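Once the tokenizer is defined, its behavior can be checked directly with the `_analyze` API (the index name `my_index` is an assumption for illustration):

```json
POST my_index/_analyze
{
  "tokenizer": "autocomplete_tokenizer",
  "text": "quick"
}
```

With `min_gram: 1` and `max_gram: 20`, an edge n-gram tokenizer emits every prefix of the term: `q`, `qu`, `qui`, `quic`, `quick`. Indexing those prefixes is what lets a partially typed query match the full term.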
Task 2: Fill in the blank (medium)

Complete the analyzer definition to use the edge n-gram tokenizer for autocomplete.

Elasticsearch
{
  "settings": {
    "analysis": {
      "analyzer": {
        "autocomplete": {
          "type": "custom",
          "tokenizer": "[1]",
          "filter": ["lowercase"]
        }
      }
    }
  }
}
Options:
A. standard
B. autocomplete_tokenizer
C. whitespace
D. keyword
Common Mistakes
Using 'standard' tokenizer which does not generate n-grams.
Using 'keyword' tokenizer which does not split tokens.
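For reference, the tokenizer from Task 1 and the analyzer from this task live side by side under the same `analysis` block. A sketch of how a completed settings block fits together (index name `my_index` assumed):

```json
PUT my_index
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "autocomplete_tokenizer": {
          "type": "edge_ngram",
          "min_gram": 1,
          "max_gram": 20
        }
      },
      "analyzer": {
        "autocomplete": {
          "type": "custom",
          "tokenizer": "autocomplete_tokenizer",
          "filter": ["lowercase"]
        }
      }
    }
  }
}
```

The custom analyzer refers to the tokenizer by the name it was registered under, so the two definitions must appear in the same index's settings.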
Task 3: Fill in the blank (hard)

Fix the error in the mapping to use the autocomplete analyzer for the "name" field.

Elasticsearch
{
  "mappings": {
    "properties": {
      "name": {
        "type": "text",
        "analyzer": "[1]"
      }
    }
  }
}
Options:
A. keyword
B. standard
C. whitespace
D. autocomplete
Common Mistakes
Using 'standard' analyzer which does not support autocomplete n-grams.
Using 'keyword' analyzer which does not tokenize the field.
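After the mapping is in place, the field's effective analysis chain can be verified with the `field` form of the `_analyze` API (index name `my_index` assumed):

```json
POST my_index/_analyze
{
  "field": "name",
  "text": "Laptop"
}
```

Because the custom analyzer includes the `lowercase` filter, the returned tokens should be lowercased prefixes (`l`, `la`, `lap`, and so on), confirming that the field is analyzed with the autocomplete analyzer rather than the default `standard` one.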
Task 4: Fill in the blank (hard)

Fill both blanks to define a search analyzer that uses the standard tokenizer and lowercase filter.

Elasticsearch
{
  "settings": {
    "analysis": {
      "analyzer": {
        "search_analyzer": {
          "type": "custom",
          "tokenizer": "[1]",
          "filter": ["[2]"]
        }
      }
    }
  }
}
Options:
A. standard
B. autocomplete_tokenizer
C. lowercase
D. keyword
Common Mistakes
Using the edge n-gram tokenizer for search analyzer which breaks normal search.
Omitting the lowercase filter causing case-sensitive search.
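The reason the search analyzer must not use the edge n-gram tokenizer: if a query like "star" were also n-grammed at search time, it would expand into `s`, `st`, `sta`, `star` and match nearly everything. A quick way to confirm the search analyzer emits whole, lowercased words is to run it through `_analyze` (index name `my_index` assumed):

```json
POST my_index/_analyze
{
  "analyzer": "search_analyzer",
  "text": "Star Wars"
}
```

With the `standard` tokenizer and `lowercase` filter, this should yield exactly two tokens, `star` and `wars`, so each query term is matched as the user typed it against the n-grams stored at index time.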
Task 5: Fill in the blank (hard)

Fill all three blanks to complete the mapping with both autocomplete and search analyzers for the "title" field.

Elasticsearch
{
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "[1]",
        "search_analyzer": "[2]",
        "fields": {
          "keyword": {
            "type": "keyword",
            "ignore_above": [3]
          }
        }
      }
    }
  }
}
Options:
A. autocomplete
B. search_analyzer
C. 256
D. standard
Common Mistakes
Mixing up analyzer and search_analyzer names.
Setting ignore_above to a string instead of a number.
Omitting the keyword subfield for exact matches.
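To see the completed mapping in action, a document can be indexed and then queried two ways, prefix matching via the analyzed `title` field and exact matching via the `title.keyword` subfield (index name `my_index` and the sample document are assumptions for illustration):

```json
PUT my_index/_doc/1
{ "title": "Star Wars" }

GET my_index/_search
{
  "query": { "match": { "title": "sta" } }
}

GET my_index/_search
{
  "query": { "term": { "title.keyword": "Star Wars" } }
}
```

The first search matches because "Star Wars" was indexed as edge n-grams (including `sta`), while the query text is analyzed with the search analyzer into the single token `sta`. The second search matches only the exact, unanalyzed string, which is what the `keyword` subfield exists for.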