Complete the code to define a custom analyzer named "my_analyzer" using the "standard" tokenizer.
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "[1]"
        }
      }
    }
  }
}

The standard tokenizer is the default and splits text into words on word boundaries. To create a custom analyzer with this tokenizer, you specify "standard" as the tokenizer.
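Once the settings are in place, the analyzer can be exercised with Elasticsearch's `_analyze` API. A minimal sketch, assuming the settings above were applied to a hypothetical index named `my-index`:

```json
GET /my-index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "Brown-Foxes jumped"
}
```

With only the standard tokenizer and no filters, the response should list the tokens Brown, Foxes, and jumped: the text is split on word boundaries (including the hyphen), and the original case is preserved.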
Complete the code to add a lowercase filter to the custom analyzer "my_analyzer".
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["[1]"]
        }
      }
    }
  }
}

The lowercase filter converts all tokens to lowercase, which helps make searches case-insensitive.
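The filter's effect can be checked without creating an index at all, since `_analyze` also accepts a tokenizer and filter chain inline. A quick sketch:

```json
GET /_analyze
{
  "tokenizer": "standard",
  "filter": ["lowercase"],
  "text": "The QUICK Brown Fox"
}
```

The returned tokens should be the, quick, brown, and fox, so a query for "quick" will match text that was indexed as "QUICK".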
Fix the error in the custom analyzer definition by completing the filter list to include both "lowercase" and "stop" filters.
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": ["[1]", "[2]"]
        }
      }
    }
  }
}

The analyzer needs both the lowercase filter to normalize case and the stop filter to remove common stop words.
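The combined chain can likewise be sketched inline with `_analyze`. Note that filter order matters: lowercase runs first here, so the stop filter sees already-lowercased tokens:

```json
GET /_analyze
{
  "tokenizer": "standard",
  "filter": ["lowercase", "stop"],
  "text": "The Quick and the Dead"
}
```

The stop filter's default English stop list drops tokens like "the" and "and", so only quick and dead should remain.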
Fill both blanks to create a custom analyzer named "folding_analyzer" that uses the "standard" tokenizer and applies both "lowercase" and "asciifolding" filters.
{
  "settings": {
    "analysis": {
      "analyzer": {
        "folding_analyzer": {
          "type": "custom",
          "tokenizer": "[1]",
          "filter": ["[2]", "[3]"]
        }
      }
    }
  }
}

This analyzer uses the standard tokenizer to split text, then applies lowercase to normalize case and asciifolding to convert accented characters to their ASCII equivalents.
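A short sketch of the folding behavior, again using the inline form of `_analyze`:

```json
GET /_analyze
{
  "tokenizer": "standard",
  "filter": ["lowercase", "asciifolding"],
  "text": "Café Déjà Vu"
}
```

The accented characters should be folded to their ASCII equivalents, yielding the tokens cafe, deja, and vu, so searches typed without accents still match.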
Fill all four blanks to define a custom analyzer "custom_analyzer" with "whitespace" tokenizer and filters "lowercase", "stop", and "asciifolding" in that order.
{
  "settings": {
    "analysis": {
      "analyzer": {
        "custom_analyzer": {
          "type": "custom",
          "tokenizer": "[1]",
          "filter": ["[2]", "[3]", "[4]"]
        }
      }
    }
  }
}

The analyzer uses the whitespace tokenizer to split text on spaces, then applies lowercase to normalize case, stop to remove common words, and asciifolding to convert accented characters.
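To see how the whitespace tokenizer differs from standard, the full chain can be sketched inline:

```json
GET /_analyze
{
  "tokenizer": "whitespace",
  "filter": ["lowercase", "stop", "asciifolding"],
  "text": "The Brown-Foxes at the Café"
}
```

Unlike the standard tokenizer, whitespace splits only on spaces, so "Brown-Foxes" should survive as the single token brown-foxes; the stop filter removes common words such as "the" and "at", and asciifolding turns "café" into cafe.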