Consistent hashing

Motivation

Data partitioning, i.e. distributing data across a set of nodes, poses a couple of challenges:

  • How do we know on which node a particular piece of data will be stored?

  • When we add or remove nodes, how do we know which data will move between nodes? And how can we minimize data movement when nodes join or leave?

The naive approach is to hash the key and take the result modulo the number of servers to get a server number. This is hard to maintain: whenever we add or remove a server, almost every key maps to a different server and its data has to be moved, which gets messy quite quickly.
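To make the problem concrete, here is a small Python sketch of the modulo scheme (the key names and the choice of MD5 are purely illustrative), showing how many keys change servers when the cluster grows from 4 to 5 servers:

    import hashlib

    def server_for(key: str, num_servers: int) -> int:
        # Naive placement: hash the key, then take it modulo the server count.
        digest = int(hashlib.md5(key.encode()).hexdigest(), 16)
        return digest % num_servers

    keys = [f"user:{i}" for i in range(1000)]

    # Count how many keys land on a different server when we go from 4 to 5 servers.
    moved = sum(1 for k in keys if server_for(k, 4) != server_for(k, 5))
    print(f"{moved} of {len(keys)} keys moved")  # typically around 80% of the keys

Roughly 80% of the keys have to move for this change, even though the cluster only grew by one server.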

Solution

Consistent Hashing maps data to physical nodes and ensures that only a small set of keys moves when servers are added or removed. It does this by arranging the hash space of the data as a ring; each node on the ring is assigned a range of that hash space.

With consistent hashing, the ring is divided into smaller, predefined ranges. Each node is assigned one of these ranges. The start of the range is called a token. This means that each node will be assigned one token. The range assigned to each node is computed as follows:

Range start: Token value
Range end: Next token value - 1

For example, with four servers and a hash space of 1-100, the nodes would have the following tokens and ranges:

Server      Token    Range Start    Range End

Server 1    1        1              25
Server 2    26       26             50
Server 3    51       51             75
Server 4    76       76             100
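The ranges in the table can be derived mechanically from the tokens. A minimal sketch, assuming the 1-100 hash space used in the example (the last range wraps around to the top of the hash space):

    # Tokens from the table above; the hash space is 1-100 and wraps around.
    tokens = {"Server 1": 1, "Server 2": 26, "Server 3": 51, "Server 4": 76}

    sorted_tokens = sorted(tokens.values())  # [1, 26, 51, 76]
    for name, token in tokens.items():
        idx = sorted_tokens.index(token)
        next_token = sorted_tokens[(idx + 1) % len(sorted_tokens)]
        # Range end is one less than the next token; the last range wraps to 100.
        range_end = next_token - 1 if next_token > token else 100
        print(name, token, range_end)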

Adding/Reading data

Whenever the system needs to read or write data:

  1. The first step is to apply the MD5 hashing algorithm to the key.

  2. The output of the hash determines which range the key falls in and hence on which node the data will be stored (sketched below).
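A minimal Python sketch of this lookup, following the range convention above (a key belongs to the node with the largest token not exceeding its hash); the node names are hypothetical:

    import bisect
    import hashlib

    class ConsistentHashRing:
        """Minimal sketch of a consistent-hash ring with one token per node."""

        def __init__(self, nodes):
            # One token per node: the node's position on the ring.
            pairs = sorted((self._hash(n), n) for n in nodes)
            self._tokens = [t for t, _ in pairs]
            self._nodes = [n for _, n in pairs]

        @staticmethod
        def _hash(value: str) -> int:
            return int(hashlib.md5(value.encode()).hexdigest(), 16)

        def get_node(self, key: str) -> str:
            # The key belongs to the node with the largest token <= hash(key);
            # if the key hashes below every token, it wraps to the last node.
            h = self._hash(key)
            idx = bisect.bisect_right(self._tokens, h) - 1
            return self._nodes[idx]  # idx == -1 wraps around to the last node

    ring = ConsistentHashRing(["node-a", "node-b", "node-c", "node-d"])
    print(ring.get_node("user:42"))  # deterministic for a given key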

Adding/Removing Nodes

  • When a new node is added, it takes over part of the range of a single existing node (its neighbour on the ring), so only that node is affected.

  • When a node is removed, its entire range is taken over by its neighbouring node on the ring, which becomes responsible for all of the keys stored on the outgoing node (illustrated in the sketch below).
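Continuing the sketch above, we can check that adding a fifth node only moves the keys that fall into the slice of the ring the new node takes over, and that they all come from a single existing node:

    keys = [f"user:{i}" for i in range(1000)]

    before = ConsistentHashRing(["node-a", "node-b", "node-c", "node-d"])
    after = ConsistentHashRing(["node-a", "node-b", "node-c", "node-d", "node-e"])

    moved = [k for k in keys if before.get_node(k) != after.get_node(k)]
    # Every moved key now maps to the new node, and all of them previously
    # belonged to the one node whose range was split.
    assert all(after.get_node(k) == "node-e" for k in moved)
    assert len({before.get_node(k) for k in moved}) <= 1
    print(f"{len(moved)} of {len(keys)} keys moved")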

Issues:

  • non-uniform data and load distribution - because each node owns one large range, data and query load may not be spread evenly across nodes.

  • adding or removing nodes - requires recomputing and reassigning tokens, which is a significant administrative overhead for a large cluster.

  • hotspots - since each node is assigned one large range, if the data is not evenly distributed, some nodes can become hotspots.

  • node rebuilding - since each node’s data might be replicated (for fault-tolerance) on a fixed number of other nodes, when we need to rebuild a node, only its replica nodes can provide the data. This puts a lot of pressure on the replica nodes and can lead to service degradation.

To address these issues, instead of assigning a single token to a node, the hash range is divided into multiple smaller ranges, and each physical node is assigned several of these smaller ranges. Each of these subranges is considered a virtual node (Vnode).

Vnodes are randomly distributed across the cluster and are generally non-contiguous so that no two neighboring Vnodes are assigned to the same physical node or rack.
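A minimal extension of the earlier sketch, assuming each physical node is given a configurable number of virtual tokens (more for more powerful machines); the counts and names are illustrative:

    import bisect
    import hashlib

    class VNodeRing:
        """Sketch of a ring where each physical node owns many virtual nodes."""

        def __init__(self, nodes_with_weights, vnodes_per_unit=64):
            # A node with weight 2 gets twice as many tokens as a node with
            # weight 1, so it owns a proportionally larger share of the ring.
            pairs = []
            for node, weight in nodes_with_weights.items():
                for i in range(weight * vnodes_per_unit):
                    pairs.append((self._hash(f"{node}#vnode{i}"), node))
            pairs.sort()
            self._tokens = [t for t, _ in pairs]
            self._nodes = [n for _, n in pairs]

        @staticmethod
        def _hash(value: str) -> int:
            return int(hashlib.md5(value.encode()).hexdigest(), 16)

        def get_node(self, key: str) -> str:
            h = self._hash(key)
            idx = bisect.bisect_right(self._tokens, h) - 1
            return self._nodes[idx]

    ring = VNodeRing({"node-a": 1, "node-b": 1, "node-c": 2})  # node-c is twice as powerful
    print(ring.get_node("user:42"))

Because each physical node's tokens are scattered around the ring, adding or removing a node moves small slices from (or to) many nodes at once rather than one large slice from a single neighbour.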

Advantages:

  1. Vnodes spread the load more evenly across the physical nodes in the cluster by dividing the hash range into smaller subranges. This speeds up rebalancing after adding or removing nodes: a new node receives many Vnodes from the existing nodes to keep the cluster balanced, and when a node needs to be rebuilt, many nodes participate in the rebuild instead of a fixed number of replicas.

  2. Vnodes make it easier to maintain a cluster containing heterogeneous machines: we can assign more sub-ranges to a powerful server and fewer to a less powerful one.

  3. Since each physical node is assigned many small ranges instead of one big range, the probability of hotspots decreases.

Applications

Dynamo and Cassandra use Consistent Hashing to distribute their data across nodes.
