Consider an Elasticsearch cluster with 3 nodes and 5 primary shards per index. You run the following API call:
GET /_cluster/health
What value will the status field most likely show if all nodes and shards are functioning correctly?
Think about what the cluster health colors mean in Elasticsearch.
In Elasticsearch, green means all primary and replica shards are allocated and the cluster is fully functional. Yellow means all primary shards are allocated but some replicas are not. Red means some primary shards are unallocated. Blue is not a valid status.
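The color logic can be sketched as a small function — this is an illustrative model of the rules above, not Elasticsearch's actual implementation:

```python
def cluster_status(unassigned_primaries: int, unassigned_replicas: int) -> str:
    """Illustrative mapping of unallocated shard counts to a health color."""
    if unassigned_primaries > 0:
        return "red"      # at least one primary shard is unallocated
    if unassigned_replicas > 0:
        return "yellow"   # all primaries allocated, some replicas are not
    return "green"        # every primary and replica shard is allocated

# With all 5 primaries (and their replicas) allocated across the 3 nodes:
print(cluster_status(0, 0))  # -> green
```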
You create an index in Elasticsearch with 4 primary shards and set the number of replicas to 2. How many total shards will this index have?
Remember that replicas are copies of primary shards.
Total shards = primary shards + (primary shards × number of replicas). Here, 4 + (4 × 2) = 12 shards.
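The arithmetic can be checked directly:

```python
def total_shards(primaries: int, replicas: int) -> int:
    # Each replica setting creates one full copy of every primary shard.
    return primaries + primaries * replicas

print(total_shards(4, 2))  # -> 12
```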
Given the following Elasticsearch node setting snippet, what error will occur when starting the node?
node.attr.zone: us-east-1a
node.attr.zone: us-east-1b
Check if the same key is repeated in the configuration file.
YAML mappings require unique keys. Defining 'node.attr.zone' twice causes the settings parser to reject the file with a duplicate-key error, so the node fails to start.
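A strict settings loader's duplicate-key check can be sketched like this — a minimal toy parser for 'key: value' lines, not Elasticsearch's real loader:

```python
def parse_settings(text: str) -> dict:
    """Toy strict parser for 'key: value' lines that rejects duplicate keys,
    analogous to how a settings loader can refuse a malformed config file."""
    settings = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        key, _, value = line.partition(":")
        key = key.strip()
        if key in settings:
            raise ValueError(f"duplicate settings key [{key}]")
        settings[key] = value.strip()
    return settings

try:
    parse_settings("node.attr.zone: us-east-1a\nnode.attr.zone: us-east-1b")
except ValueError as err:
    print(err)  # -> duplicate settings key [node.attr.zone]
```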
Assume you have a cluster with nodes tagged by zone attribute. You run this allocation filtering query:
GET /_cluster/allocation/explain
{
"index": "myindex",
"shard": 0,
"primary": true
}
The node has node.attr.zone: us-west-2a, but the index has allocation filtering set to include.zone: us-east-1a. What will the explanation say about shard allocation?
Think about how allocation filtering works with node attributes.
Allocation filtering restricts shards to nodes whose attributes match the filter. Since the node's zone (us-west-2a) does not match the include filter (us-east-1a), the explain API will report that the shard cannot be allocated to that node.
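The matching rule can be sketched as follows — a simplified model of include filtering, where every filter key must match the node's attributes:

```python
def can_allocate(node_attrs: dict, include_filters: dict) -> bool:
    """A node is eligible only if every include filter matches its attributes.
    Simplified model: real filters also support wildcards and value lists."""
    return all(node_attrs.get(k) == v for k, v in include_filters.items())

node = {"zone": "us-west-2a"}
filters = {"zone": "us-east-1a"}  # set via index.routing.allocation.include.zone
print(can_allocate(node, filters))  # -> False: the shard stays unallocated here
```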
When a new node joins an Elasticsearch cluster, how does the cluster decide to rebalance shards across nodes?
Consider Elasticsearch's automatic cluster management features.
Elasticsearch automatically rebalances shards when nodes join or leave the cluster, moving shards to even out shard counts and disk usage across nodes, subject to allocation filtering and rebalance settings.
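A toy round-robin assignment illustrates the effect of rebalancing — the real balancer weighs per-node shard counts and disk watermarks, but the outcome is a roughly even spread:

```python
def rebalance(shards: list, nodes: list) -> dict:
    """Round-robin shard assignment: a toy stand-in for the balancer,
    which in reality weighs shard counts and disk usage per node."""
    assignment = {n: [] for n in nodes}
    for i, shard in enumerate(shards):
        assignment[nodes[i % len(nodes)]].append(shard)
    return assignment

# 6 shards spread over a cluster that just grew from 2 to 3 nodes:
print(rebalance(list(range(6)), ["node-1", "node-2", "node-3"]))
```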