Reroute Unassigned Shards
There are 3 cluster health states:
- green: All primary and replica shards are active
- yellow: All primary shards are active, but not all replica shards are active
- red: Not all primary shards are active
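The current status can be read from the cluster health API (GET /_cluster/health). Here is a minimal sketch of checking it; the payload below is an illustrative sample, not a live call:

```python
import json

# Illustrative /_cluster/health response (fields abbreviated).
sample = '''{
  "cluster_name": "my-cluster",
  "status": "red",
  "active_primary_shards": 10,
  "unassigned_shards": 4
}'''

health = json.loads(sample)
# status is one of "green", "yellow", "red"
print(health["status"], health["unassigned_shards"])
```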
When cluster health is red, at least one primary shard is missing, which means part of your data is unavailable. You can do nothing with that data until the cluster recovers, which is very bad indeed. I will share with you how to deal with one common situation: when the cluster is red due to unassigned shards.
The general idea is pretty simple: find the shards that are unassigned and manually assign them to a node with the reroute API. Let's see how we can do that step by step. Then we can combine the steps into a simple, configurable script.
Step 1: Check Unassigned Shards
To get cluster information, we usually use the cat APIs. The
GET /_cat/shards endpoint shows a detailed view of which nodes hold which shards.
By piping cat shards to fgrep, we can get all unassigned shards.
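The same filtering can be done in a few lines of Python. The shard listing below is an illustrative sample of what GET /_cat/shards returns:

```python
# Illustrative output of GET /_cat/shards:
# index shard prirep state [docs store ip node]
cat_shards = """\
logs-2016.03.01 0 p STARTED    12345 12mb 10.0.0.1 node-1
logs-2016.03.01 0 r UNASSIGNED
logs-2016.03.02 1 p UNASSIGNED
"""

# Equivalent of: curl -s localhost:9200/_cat/shards | fgrep UNASSIGNED
unassigned = [line.split() for line in cat_shards.splitlines()
              if "UNASSIGNED" in line]
for index, shard, prirep, state in unassigned:
    print(index, shard, prirep)
```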
If you don't want to deal with shell scripts, you can also find these unassigned shards using another endpoint,
POST /_flush/synced. This endpoint does more than return information: it allows an administrator to initiate a synced flush manually. This can be particularly useful for a planned (rolling) cluster restart, where you can stop indexing and don't want to wait the default 5 minutes for idle indices to be sync-flushed automatically. It returns a JSON response.
If there are failed shards in the response, we can iterate through each index's failures array to collect all the unassigned ones.
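A sketch of that iteration, using a simplified sample of the synced flush response (real responses carry more fields per failure entry):

```python
import json

# Simplified sample of a POST /_flush/synced response.
resp = json.loads('''{
  "_shards": {"total": 4, "successful": 3, "failed": 1},
  "logs-2016.03.01": {
    "total": 2, "successful": 1, "failed": 1,
    "failures": [
      {"shard": 0,
       "reason": "unassigned",
       "routing": {"state": "UNASSIGNED", "primary": true,
                   "shard": 0, "index": "logs-2016.03.01"}}
    ]
  }
}''')

unassigned = []
for index, result in resp.items():
    if index == "_shards":  # skip the summary entry
        continue
    for failure in result.get("failures", []):
        routing = failure.get("routing", {})
        if routing.get("state") == "UNASSIGNED":
            unassigned.append((index, failure["shard"]))
print(unassigned)
```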
Step 2: Reroute
The reroute API allows you to explicitly execute cluster allocation commands. In particular, an unassigned shard can be explicitly allocated to a specific node.
There are 3 kinds of commands you can use:
move: Move a started shard from one node to another node. Accepts index and shard for index name and shard number, from_node for the node to move the shard from, and to_node for the node to move the shard to.
cancel: Cancel allocation of a shard (or recovery). Accepts index and shard for index name and shard number, and node for the node to cancel the shard allocation on. It also accepts allow_primary flag to explicitly specify that it is allowed to cancel allocation for a primary shard. This can be used to force resynchronization of existing replicas from the primary shard by cancelling them and allowing them to be reinitialized through the standard reallocation process.
allocate: Allocate an unassigned shard to a node. Accepts the index and shard for index name and shard number, and node to allocate the shard to. It also accepts allow_primary flag to explicitly specify that it is allowed to explicitly allocate a primary shard (might result in data loss).
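Each command goes into the commands array of a POST /_cluster/reroute request body. Here is a sketch of building an allocate command (this is the pre-5.0 syntax described above; newer versions split it into allocate_replica and allocate_empty_primary, and the index and node names below are placeholders):

```python
import json

def allocate_command(index, shard, node, allow_primary=False):
    """Build one `allocate` command for POST /_cluster/reroute."""
    return {"allocate": {"index": index, "shard": shard,
                         "node": node, "allow_primary": allow_primary}}

# allow_primary=True forces allocation of a primary shard
# and might result in data loss.
body = {"commands": [allocate_command("logs-2016.03.01", 0, "node-1",
                                      allow_primary=True)]}
print(json.dumps(body))
```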
Combining step 2 with the unassigned shards from step 1, we can reroute all unassigned shards one by one, recovering the cluster from the red state faster.
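The combining logic can be sketched as follows. This is a minimal illustration, not a production script: it assumes round-robin allocation across a given list of data nodes and the pre-5.0 allocate command, and the resulting body would then be POSTed to /_cluster/reroute:

```python
import itertools
import json

def reroute_commands(cat_shards_text, nodes):
    """Turn `GET /_cat/shards` output into reroute `allocate` commands,
    spreading unassigned shards across the given nodes round-robin."""
    node_cycle = itertools.cycle(nodes)
    commands = []
    for line in cat_shards_text.splitlines():
        fields = line.split()
        if "UNASSIGNED" in fields:
            index, shard = fields[0], int(fields[1])
            commands.append({"allocate": {
                "index": index, "shard": shard,
                "node": next(node_cycle),
                # allow_primary may cause data loss for primary shards
                "allow_primary": True}})
    return {"commands": commands}

sample = """\
logs-a 0 p STARTED 100 1mb 10.0.0.1 node-1
logs-a 1 p UNASSIGNED
logs-b 0 r UNASSIGNED
"""
body = reroute_commands(sample, ["node-1", "node-2"])
print(json.dumps(body))
```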
Below is a python script I wrote using
POST /_flush/synced and the reroute command.
Below is a shell script I found in another blog post.
EDIT: Based on Vincent’s comment I updated the shell script:
Possible Unassigned Shard Reasons
FYI, these are the possible reasons for a shard to be in an unassigned state:
| Reason | Description |
| --- | --- |
| INDEX_CREATED | Unassigned as a result of an API creation of an index |
| CLUSTER_RECOVERED | Unassigned as a result of a full cluster recovery |
| INDEX_REOPENED | Unassigned as a result of opening a closed index |
| DANGLING_INDEX_IMPORTED | Unassigned as a result of importing a dangling index |
| NEW_INDEX_RESTORED | Unassigned as a result of restoring into a new index |
| EXISTING_INDEX_RESTORED | Unassigned as a result of restoring into a closed index |
| REPLICA_ADDED | Unassigned as a result of explicit addition of a replica |
| ALLOCATION_FAILED | Unassigned as a result of a failed allocation of the shard |
| NODE_LEFT | Unassigned as a result of the node hosting it leaving the cluster |
| REROUTE_CANCELLED | Unassigned as a result of explicit cancel reroute command |
| REINITIALIZED | When a shard moves from started back to initializing, for example, with shadow replicas |
| REALLOCATED_REPLICA | A better replica location is identified and causes the existing replica allocation to be cancelled |
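You can read the reason for each shard straight from the cat shards API by requesting the unassigned.reason column, e.g. GET /_cat/shards?h=index,shard,prirep,state,unassigned.reason. A sketch of parsing that output (the sample lines are illustrative):

```python
# Illustrative output of
# GET /_cat/shards?h=index,shard,prirep,state,unassigned.reason
cat_output = """\
logs-a 0 p STARTED
logs-a 1 p UNASSIGNED NODE_LEFT
logs-b 0 r UNASSIGNED REPLICA_ADDED
"""

reasons = {}
for line in cat_output.splitlines():
    fields = line.split()
    # UNASSIGNED lines carry a fifth column with the reason
    if len(fields) == 5 and fields[3] == "UNASSIGNED":
        index, shard, _, _, reason = fields
        reasons[(index, int(shard))] = reason
print(reasons)
```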