Complete the code to define a standard analyzer in Elasticsearch.
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_standard_analyzer": {
          "type": "[1]"
        }
      }
    }
  }
}
The standard analyzer is the default and most common analyzer in Elasticsearch. It tokenizes text into terms on word boundaries and removes most punctuation.
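Once the blank is filled in with "standard", the definition can be sanity-checked with the _analyze API. A minimal sketch (the sample text is illustrative):

POST /_analyze
{
  "analyzer": "standard",
  "text": "The 2 QUICK Brown-Foxes!"
}

The expected tokens are the, 2, quick, brown, foxes: the text is split on word boundaries, lowercased, and the punctuation is dropped.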
Complete the code to create an index with a standard analyzer applied to a field.
{
  "mappings": {
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "[1]"
      }
    }
  }
}
The standard analyzer is specified as the analyzer for the 'content' field, so text in that field is tokenized and analyzed with the default standard analyzer.
Fix the error in the analyzer definition to use the standard analyzer correctly.
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_analyzer": {
          "type": "[1]"
        }
      }
    }
  }
}
The analyzer type must be the lowercase string "standard". Elasticsearch analyzer type names are case-sensitive, so a capitalized variant such as "Standard" is rejected.
Fill both blanks to define a custom analyzer using the standard tokenizer and lowercase filter.
{
  "settings": {
    "analysis": {
      "analyzer": {
        "custom_std_lower": {
          "tokenizer": "[1]",
          "filter": ["[2]"]
        }
      }
    }
  }
}
The standard tokenizer splits text into terms on word boundaries. The lowercase filter converts all tokens to lowercase.
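With [1] filled in as standard and [2] as lowercase, custom_std_lower behaves much like the built-in standard analyzer. Once an index using these settings exists, the analyzer can be exercised with the _analyze API; the index name my_index below is illustrative:

GET /my_index/_analyze
{
  "analyzer": "custom_std_lower",
  "text": "Quick-Brown FOX"
}

The response should contain the lowercase tokens quick, brown, fox.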
Fill both blanks to create an index with a standard analyzer and a mapping that uses it on the 'title' field.
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_std_analyzer": {
          "type": "[1]"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "[2]"
      }
    }
  }
}
The analyzer type is standard. The mapping references the custom analyzer by its name, my_std_analyzer; the name used in the mapping must match the name under which the analyzer is defined in the index settings.
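For reference, the complete index-creation request with both blanks filled in might look like the following; the index name my_index is illustrative:

PUT /my_index
{
  "settings": {
    "analysis": {
      "analyzer": {
        "my_std_analyzer": {
          "type": "standard"
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "title": {
        "type": "text",
        "analyzer": "my_std_analyzer"
      }
    }
  }
}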