Reposted from: http://colobu.com/2015/04/13/consistent-hash-algorithm-in-java-memcached-client/
The consistent hash algorithm in spymemcached, the memcached Java client
I recently came across two articles: 江南白衣's 《陌生但默默一统江湖的MurmurHash》, on MurmurHash, the little-known hash function that has quietly taken over, and 张洋's 《一致性哈希算法及其在分布式系统中的应用》, on consistent hashing and its applications in distributed systems. Although I have used spymemcached, a memcached Java client, in projects for several years, I had never looked closely at the details of its consistent hashing, so I took this opportunity to read through its source code.
As is well known, Memcached itself provides no distribution mechanism. It is usually the client that implements a consistent hash algorithm, hashing each key to decide which node the value should be stored on and read from.
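To make the idea concrete, here is a minimal sketch of how a client typically locates the node for a key on a consistent-hash ring. This is not spymemcached's actual code, just an illustration: servers are placed at hash positions on a sorted ring, and a key is served by the first position clockwise from the key's hash, wrapping around at the end.

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Minimal consistent-hash lookup sketch: `ring` maps a hash position to a
// server name; a key is served by the first position >= hash(key),
// wrapping around to the lowest position at the end of the ring.
public class RingLookup {

    private final TreeMap<Long, String> ring = new TreeMap<>();

    public void addServer(String server, long position) {
        ring.put(position, server);
    }

    public String serverFor(long keyHash) {
        SortedMap<Long, String> tail = ring.tailMap(keyHash);
        // wrap around if the key hashes past the last ring position
        Long position = tail.isEmpty() ? ring.firstKey() : tail.firstKey();
        return ring.get(position);
    }
}
```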
Implementation of the Ketama hash
spymemcached implements several hash algorithms: NATIVE_HASH, CRC_HASH, FNV1_64_HASH, FNV1A_64_HASH, FNV1_32_HASH, FNV1A_32_HASH, and KETAMA_HASH.
Compared with the other algorithms, KETAMA_HASH, which is based on MD5 digests, distributes each server's virtual nodes relatively evenly around the ring.
The class below is a stripped-down version of spymemcached's KetamaNodeLocator that I use to examine how the generated virtual nodes are distributed: it prints the gap between every pair of adjacent virtual nodes. If the gaps are reasonably even, we can expect keys hashed with the same algorithm to fall evenly across the nodes.
When building a node's virtual nodes, spymemcached hashes strings of the form node address + "-" + i, where i runs up to the number of virtual nodes per node, 160 by default. (In the ketama case each 16-byte MD5 digest is split into four 4-byte ring positions, which is why the loop below only iterates numReps / 4 times.)
```java
package com.colobu.consistenthashing;

import java.util.List;
import java.util.TreeMap;

public class Ketama {

    // the hash ring: position on the ring -> node placed there
    public TreeMap<Long, Node> hashNodes;
    public HashAlgorithm hashAlgorithm;

    protected void setKetamaNodes(List<Node> nodes) {
        TreeMap<Long, Node> newNodeMap = new TreeMap<Long, Node>();
        int numReps = 160; // virtual nodes per physical node
        for (Node node : nodes) {
            if (hashAlgorithm == HashAlgorithm.KETAMA_HASH) {
                // each MD5 digest yields four 4-byte ring positions,
                // so only numReps / 4 digests are needed per node
                for (int i = 0; i < numReps / 4; i++) {
                    byte[] digest = HashAlgorithm.computeMd5(node.getName() + "-" + i);
                    for (int h = 0; h < 4; h++) {
                        Long k = ((long) (digest[3 + h * 4] & 0xFF) << 24)
                                | ((long) (digest[2 + h * 4] & 0xFF) << 16)
                                | ((long) (digest[1 + h * 4] & 0xFF) << 8)
                                | (digest[h * 4] & 0xFF);
                        newNodeMap.put(k, node);
                    }
                }
            } else {
                // other algorithms hash the "<node>-<i>" string directly
                for (int i = 0; i < numReps; i++) {
                    newNodeMap.put(hashAlgorithm.hash(node + "-" + i), node);
                }
            }
        }
        hashNodes = newNodeMap;
    }
}
```
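The Node class is not shown in the original post. For the excerpts here to compile, any small class with a getName() accessor will do; a minimal stand-in, assuming toString() returns the same name (the non-ketama branch concatenates the node object directly), might look like this:

```java
package com.colobu.consistenthashing;

// Minimal stand-in for the Node type used above: just a named node.
public class Node {

    private final String name;

    public Node(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    @Override
    public String toString() {
        // the non-ketama branch builds keys as node + "-" + i,
        // so toString() returns the same name as getName()
        return name;
    }
}
```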
Here is a test class to look at how the virtual nodes are distributed:
```java
package com.colobu.consistenthashing;

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map.Entry;

public class Main {

    public static void main(String[] args) {
        //System.out.println("testing ketama hash");
        //testKetama();

        //System.out.println("\r\n\r\ntesting native hash");
        //testHash(HashAlgorithm.NATIVE_HASH);

        //System.out.println("\r\n\r\ntesting CRC hash");
        //max=32767
        //testHash(HashAlgorithm.CRC_HASH);

        //System.out.println("\r\n\r\ntesting FNV1_64_HASH");
        //testHash(HashAlgorithm.FNV1_64_HASH);

        //System.out.println("\r\n\r\ntesting FNV1A_64_HASH");
        //testHash(HashAlgorithm.FNV1A_64_HASH);

        //System.out.println("\r\n\r\ntesting MurmurHash 32");
        //testHash(HashAlgorithm.MurmurHash_32);

        System.out.println("\r\n\r\ntesting MurmurHash 64");
        testHash(HashAlgorithm.MurmurHash_64);
    }

    // builds a ring with 10 nodes using the given algorithm and prints
    // the gap between adjacent virtual nodes
    private static void testHash(HashAlgorithm hash) {
        Ketama ketama = new Ketama();
        ketama.hashAlgorithm = hash;
        List<Node> nodes = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            nodes.add(new Node("name-" + i));
        }
        ketama.setKetamaNodes(nodes);

        Iterator<Entry<Long, Node>> it = ketama.hashNodes.entrySet().iterator();
        Entry<Long, Node> prior = it.next();
        while (it.hasNext()) {
            Entry<Long, Node> current = it.next();
            System.out.println("gap: " + (current.getKey() - prior.getKey())
                    + " = " + current.getKey() + " - " + prior.getKey());
            prior = current;
        }
    }

    // same check for the ketama (MD5-based) algorithm
    private static void testKetama() {
        Ketama ketama = new Ketama();
        ketama.hashAlgorithm = HashAlgorithm.KETAMA_HASH;
        List<Node> nodes = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            nodes.add(new Node("name-" + i));
        }
        ketama.setKetamaNodes(nodes);

        Iterator<Entry<Long, Node>> it = ketama.hashNodes.entrySet().iterator();
        Entry<Long, Node> prior = it.next();
        while (it.hasNext()) {
            Entry<Long, Node> current = it.next();
            System.out.println("gap: " + (current.getKey() - prior.getKey()));
            prior = current;
        }
    }
}
```
Judging from the actual output, the ketama algorithm does quite well: the gaps between adjacent virtual nodes are reasonably even.
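The gaps only describe the ring itself. A more direct check of the earlier claim, that keys fall evenly across nodes, is to hash a batch of keys and count how many land on each node. The sketch below is my own addition, reusing the Ketama and Node classes from this article and a tailMap-style lookup similar to what spymemcached's node locator does; with 10 nodes and 100,000 keys, each node should end up with roughly a tenth of the keys if the distribution is good.

```java
package com.colobu.consistenthashing;

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.SortedMap;

// Hashes a batch of synthetic keys and counts how many land on each node,
// checking key distribution directly rather than ring gaps.
public class KeyDistributionTest {

    public static void main(String[] args) {
        Ketama ketama = new Ketama();
        ketama.hashAlgorithm = HashAlgorithm.KETAMA_HASH;
        List<Node> nodes = new ArrayList<>();
        for (int i = 0; i < 10; i++) {
            nodes.add(new Node("name-" + i));
        }
        ketama.setKetamaNodes(nodes);

        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i < 100_000; i++) {
            long h = HashAlgorithm.KETAMA_HASH.hash("key-" + i);
            // first ring position >= hash, wrapping around at the end
            SortedMap<Long, Node> tail = ketama.hashNodes.tailMap(h);
            Node node = tail.isEmpty()
                    ? ketama.hashNodes.firstEntry().getValue()
                    : tail.get(tail.firstKey());
            counts.merge(node.getName(), 1, Integer::sum);
        }
        counts.forEach((name, count) -> System.out.println(name + ": " + count));
    }
}
```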
Adding the MurmurHash algorithm
江南白衣's article introduces the MurmurHash algorithm, and the OSChina (开源中国) community has also translated an introductory overview of hash functions.
What happens if we add MurmurHash to spymemcached? I have not benchmarked its performance, but in terms of distribution it looks quite good.
There are several MurmurHash implementations available, for example in Guava and Cassandra. I did not want to pull in an extra third-party dependency, so I simply copied Viliam Holub's implementation.
First, add MurmurHash entries to the HashAlgorithm enum:
```java
package com.colobu.consistenthashing;

import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.zip.CRC32;

public enum HashAlgorithm {

  /**
   * Native hash (String.hashCode()).
   */
  NATIVE_HASH,

  /**
   * CRC_HASH as used by the perl API. This will be more consistent both
   * across multiple API users as well as java versions, but is mostly likely
   * significantly slower.
   */
  CRC_HASH,

  /**
   * FNV hashes are designed to be fast while maintaining a low collision rate.
   * The FNV speed allows one to quickly hash lots of data while maintaining a
   * reasonable collision rate.
   *
   * @see <a href="http://www.isthe.com/chongo/tech/comp/fnv/">fnv comparisons</a>
   * @see <a href="http://en.wikipedia.org/wiki/Fowler_Noll_Vo_hash">fnv at wikipedia</a>
   */
  FNV1_64_HASH,

  /**
   * Variation of FNV.
   */
  FNV1A_64_HASH,

  /**
   * 32-bit FNV1.
   */
  FNV1_32_HASH,

  /**
   * 32-bit FNV1a.
   */
  FNV1A_32_HASH,

  MurmurHash_32,

  MurmurHash_64,

  /**
   * MD5-based hash algorithm used by ketama.
   */
  KETAMA_HASH;

  private static final long FNV_64_INIT = 0xcbf29ce484222325L;
  private static final long FNV_64_PRIME = 0x100000001b3L;

  private static final long FNV_32_INIT = 2166136261L;
  private static final long FNV_32_PRIME = 16777619;

  private static MessageDigest md5Digest = null;

  static {
    try {
      md5Digest = MessageDigest.getInstance("MD5");
    } catch (NoSuchAlgorithmException e) {
      throw new RuntimeException("MD5 not supported", e);
    }
  }

  /**
   * Compute the hash for the given key.
   *
   * @return a positive integer hash
   */
  public long hash(final String k) {
    long rv = 0;
    int len = k.length();
    switch (this) {
    case NATIVE_HASH:
      rv = k.hashCode();
      break;
    case CRC_HASH:
      // return (crc32(shift) >> 16) & 0x7fff;
      CRC32 crc32 = new CRC32();
      crc32.update(k.getBytes());
      rv = (crc32.getValue() >> 16) & 0x7fff;
      break;
    case FNV1_64_HASH:
      // Thanks to [email protected] for the pointer
      rv = FNV_64_INIT;
      for (int i = 0; i < len; i++) {
        rv *= FNV_64_PRIME;
        rv ^= k.charAt(i);
      }
      break;
    case FNV1A_64_HASH:
      rv = FNV_64_INIT;
      for (int i = 0; i < len; i++) {
        rv ^= k.charAt(i);
        rv *= FNV_64_PRIME;
      }
      break;
    case FNV1_32_HASH:
      rv = FNV_32_INIT;
      for (int i = 0; i < len; i++) {
        rv *= FNV_32_PRIME;
        rv ^= k.charAt(i);
      }
      break;
    case FNV1A_32_HASH:
      rv = FNV_32_INIT;
      for (int i = 0; i < len; i++) {
        rv ^= k.charAt(i);
        rv *= FNV_32_PRIME;
      }
      break;
    case MurmurHash_32:
      rv = MurmurHash.hash32(k);
      break;
    case MurmurHash_64:
      rv = MurmurHash.hash64(k);
      break;
    case KETAMA_HASH:
      byte[] bKey = computeMd5(k);
      rv = ((long) (bKey[3] & 0xFF) << 24)
          | ((long) (bKey[2] & 0xFF) << 16)
          | ((long) (bKey[1] & 0xFF) << 8)
          | (bKey[0] & 0xFF);
      break;
    default:
      assert false;
    }
    return rv & 0xffffffffL; /* Truncate to 32-bits */
  }

  /**
   * Get the md5 of the given key.
   */
  public static byte[] computeMd5(String k) {
    MessageDigest md5;
    try {
      md5 = (MessageDigest) md5Digest.clone();
    } catch (CloneNotSupportedException e) {
      throw new RuntimeException("clone of MD5 not supported", e);
    }
    md5.update(k.getBytes());
    return md5.digest();
  }
}
```
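Viliam Holub's MurmurHash class itself is not reproduced in the post. The sketch below is my own compact Java port of Austin Appleby's MurmurHash2 (32-bit) and MurmurHash64A, just enough to satisfy the MurmurHash.hash32/hash64 calls above. It is not byte-for-byte Holub's code: the seed constants are arbitrary and the byte handling assumes UTF-8.

```java
package com.colobu.consistenthashing;

import java.nio.charset.StandardCharsets;

// Compact Java port of MurmurHash2 (32-bit) and MurmurHash64A.
// The seeds are arbitrary fixed constants.
public final class MurmurHash {

    private static final int SEED_32 = 0x9747b28c;
    private static final long SEED_64 = 0x9747b28cL;

    private MurmurHash() {
    }

    public static int hash32(String text) {
        byte[] data = text.getBytes(StandardCharsets.UTF_8);
        final int m = 0x5bd1e995;
        final int r = 24;
        int len = data.length;
        int h = SEED_32 ^ len;

        int i = 0;
        // mix 4 bytes at a time
        while (len - i >= 4) {
            int k = (data[i] & 0xff)
                  | ((data[i + 1] & 0xff) << 8)
                  | ((data[i + 2] & 0xff) << 16)
                  | ((data[i + 3] & 0xff) << 24);
            k *= m;
            k ^= k >>> r;
            k *= m;
            h *= m;
            h ^= k;
            i += 4;
        }
        // handle the remaining 0-3 bytes (intentional fall-through)
        switch (len - i) {
        case 3: h ^= (data[i + 2] & 0xff) << 16;
        case 2: h ^= (data[i + 1] & 0xff) << 8;
        case 1: h ^= (data[i] & 0xff);
                h *= m;
        }
        h ^= h >>> 13;
        h *= m;
        h ^= h >>> 15;
        return h;
    }

    public static long hash64(String text) {
        byte[] data = text.getBytes(StandardCharsets.UTF_8);
        final long m = 0xc6a4a7935bd1e995L;
        final int r = 47;
        int len = data.length;
        long h = SEED_64 ^ (len * m);

        int i = 0;
        // mix 8 bytes at a time
        while (len - i >= 8) {
            long k = (data[i] & 0xffL)
                   | ((data[i + 1] & 0xffL) << 8)
                   | ((data[i + 2] & 0xffL) << 16)
                   | ((data[i + 3] & 0xffL) << 24)
                   | ((data[i + 4] & 0xffL) << 32)
                   | ((data[i + 5] & 0xffL) << 40)
                   | ((data[i + 6] & 0xffL) << 48)
                   | ((data[i + 7] & 0xffL) << 56);
            k *= m;
            k ^= k >>> r;
            k *= m;
            h ^= k;
            h *= m;
            i += 8;
        }
        // handle the remaining 0-7 bytes (intentional fall-through)
        switch (len - i) {
        case 7: h ^= (data[i + 6] & 0xffL) << 48;
        case 6: h ^= (data[i + 5] & 0xffL) << 40;
        case 5: h ^= (data[i + 4] & 0xffL) << 32;
        case 4: h ^= (data[i + 3] & 0xffL) << 24;
        case 3: h ^= (data[i + 2] & 0xffL) << 16;
        case 2: h ^= (data[i + 1] & 0xffL) << 8;
        case 1: h ^= (data[i] & 0xffL);
                h *= m;
        }
        h ^= h >>> r;
        h *= m;
        h ^= h >>> r;
        return h;
    }
}
```

With a class like this in place, the MurmurHash_32 and MurmurHash_64 branches of hash() compile and the Main test above can exercise them.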
Judging from the actual output, MurmurHash is also quite uniform.
The xmemcached implementation
xmemcached is another memcached Java client. It implements a set of hash algorithms similar to spymemcached's, plus a few additional ones: MYSQL_HASH, ELF_HASH, RS_HASH, LUA_HASH, and ONE_AT_A_TIME.
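As an example of how simple some of these extra algorithms are, here is a sketch of Bob Jenkins' one-at-a-time hash (the ONE_AT_A_TIME variant, also offered by Twemproxy below), written in the style of the hash() method earlier; xmemcached's actual implementation may differ in details such as character handling.

```java
// Bob Jenkins' one-at-a-time hash, truncated to an unsigned 32-bit value
// in the same way as HashAlgorithm.hash() above.
public static long oneAtATimeHash(String k) {
    int h = 0;
    for (int i = 0; i < k.length(); i++) {
        h += k.charAt(i);
        h += (h << 10);
        h ^= (h >>> 6);
    }
    h += (h << 3);
    h ^= (h >>> 11);
    h += (h << 15);
    return h & 0xffffffffL;
}
```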
Twemproxy
Twemproxy is a proxy (gateway) for memcached. It implements the following hash algorithms:
- one_at_a_time
- md5
- crc16
- crc32 (crc32 implementation compatible with libmemcached)
- crc32a (correct crc32 implementation as per the spec)
- fnv1_64
- fnv1a_64
- fnv1_32
- fnv1a_32
- hsieh
- murmur
- jenkins