To be on the safe side, I can't paste the actual business code that caused the problem, so I've reduced the issue to the following snippet:
```java
ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<>();
// The default capacity is 16; a rehash is triggered once the element count
// reaches the threshold capacity - (capacity >>> 2) = 12.
for (int i = 0; i < 11; i++) {
    map.put(i, i);
}
map.computeIfAbsent(12, (k) -> {
    // this put leads to an infinite loop :(
    map.put(100, 100);
    return k;
});
// other operations
```
If you're curious, you can run this on your own machine. Without further ado, here is the root cause: when computeIfAbsent runs and the slot for the key is empty, it creates a ReservationNode (whose hash is RESERVED = -3) and places it in that slot, then calls mappingFunction.apply(key) to compute the value, builds a Node from that value, and writes it back into the slot, completing the computeIfAbsent flow. In the code above, however, the mappingFunction performs another put on the same map, and that put triggers a rehash. While transfer walks the slot array, it checks in turn whether the Node in a slot is null, whether its hash is MOVED = -1, whether its hash is greater than 0 (a linked-list bin), and whether the Node is a TreeBin (a red-black tree bin); the one case it never checks is a hash of RESERVED = -3, which is what causes the infinite loop.
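To make this concrete, here is an abridged sketch of the bin-handling loop in JDK 8's transfer (the bookkeeping and the bodies that actually copy a bin into the new table are elided). A ReservationNode matches none of the branches, so advance is never set back to true and the loop keeps re-reading the same slot forever:

```java
// Abridged sketch of JDK 8 ConcurrentHashMap.transfer; copy logic elided for brevity.
for (int i = 0, bound = 0;;) {
    Node<K,V> f; int fh;
    // ... advance/bound bookkeeping elided ...
    if ((f = tabAt(tab, i)) == null)
        advance = casTabAt(tab, i, null, fwd);   // empty slot: mark as forwarded
    else if ((fh = f.hash) == MOVED)
        advance = true;                          // already processed
    else {
        synchronized (f) {
            if (tabAt(tab, i) == f) {
                if (fh >= 0) {
                    // linked-list bin: split into the new table, then
                    // setTabAt(tab, i, fwd); advance = true;
                }
                else if (f instanceof TreeBin) {
                    // tree bin: split into the new table, then
                    // setTabAt(tab, i, fwd); advance = true;
                }
                // a ReservationNode (hash == RESERVED == -3) matches neither branch:
                // advance stays false, so the outer loop retries the same slot forever
            }
        }
    }
}
```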
With the analysis done, the cause was clear. At the time we figured this might be a JDK "bug", so the fixes we proposed were:
- If a slot holding a ReservationNode is encountered during a rehash, give some kind of hint, for example throw an exception (see the sketch after this list);
- In theory the mappingFunction should not update the current map at all, but the JDK does not forbid that usage, so at the very least it ought to be documented.
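Since we obviously cannot patch the JDK ourselves, here is a minimal, purely hypothetical sketch of the same "fail fast" idea at the application level. The class name GuardedMap and the ThreadLocal trick are ours, not anything from the JDK; later JDK versions added a similar check internally that throws an IllegalStateException for this kind of recursive update:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical illustration only: reject re-entrant updates instead of looping forever.
class GuardedMap<K, V> {
    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();
    // true while the current thread is inside a mappingFunction of this map
    private final ThreadLocal<Boolean> inCompute = ThreadLocal.withInitial(() -> false);

    V put(K key, V value) {
        if (inCompute.get())
            throw new IllegalStateException("map must not be updated inside mappingFunction");
        return map.put(key, value);
    }

    V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction) {
        return map.computeIfAbsent(key, k -> {
            inCompute.set(true);
            try {
                return mappingFunction.apply(k);
            } finally {
                inCompute.set(false);
            }
        });
    }
}
```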
Finally, another colleague took a look at the Javadoc of computeIfAbsent:
```java
/**
 * If the specified key is not already associated with a value,
 * attempts to compute its value using the given mapping function
 * and enters it into this map unless {@code null}. The entire
 * method invocation is performed atomically, so the function is
 * applied at most once per key. Some attempted update operations
 * on this map by other threads may be blocked while computation
 * is in progress, so the computation should be short and simple,
 * and must not attempt to update any other mappings of this map.
 */
public V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction)
```
We realized the authors already knew about this problem and had even called it out in the comment... we were still too young, too simple. With that, the ConcurrentHashMap infinite-loop investigation came to a close; the takeaway is to stick to the coding convention and never update the current map inside the mappingFunction. In fact, this ConcurrentHashMap infinite loop is not limited to the scenario discussed above. The following scenario triggers it as well, and for the same reason; the code is below if you want to try it locally:
```java
ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<>();
map.computeIfAbsent(12, (k) -> {
    map.put(k, k);
    return k;
});

System.out.println(map);
// other operations
```
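For reference, one safe rewrite of this pattern (assuming the extra update does not need to be atomic with the computeIfAbsent call) is simply to keep the mappingFunction pure and perform any additional updates after it returns:

```java
ConcurrentHashMap<Integer, Integer> map = new ConcurrentHashMap<>();

// The mappingFunction only computes the value; it never touches the map itself.
Integer value = map.computeIfAbsent(12, k -> k);

// Any extra updates happen outside the mappingFunction, after it has completed.
map.put(100, 100);

System.out.println(map);
```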
Finally, let's follow the computeIfAbsent source to trace how the infinite-loop code above actually executes. For the sake of brevity, only the main path is shown:
```java
public V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction) {
    if (key == null || mappingFunction == null)
        throw new NullPointerException();
    int h = spread(key.hashCode());
    V val = null;
    int binCount = 0;
    for (Node<K,V>[] tab = table;;) {
        Node<K,V> f; int n, i, fh;
        if (tab == null || (n = tab.length) == 0)
            tab = initTable();
        else if ((f = tabAt(tab, i = (n - 1) & h)) == null) {
            Node<K,V> r = new ReservationNode<K,V>();
            synchronized (r) {
                // synchronizing on this freshly created local object does little by itself;
                // it is the CAS below that actually guards against concurrent updates
                if (casTabAt(tab, i, null, r)) {
                    binCount = 1;
                    Node<K,V> node = null;
                    try {
                        // note: the value returned here may be null
                        if ((val = mappingFunction.apply(key)) != null)
                            node = new Node<K,V>(h, key, val, null);
                    } finally {
                        setTabAt(tab, i, node);
                    }
                }
            }
            if (binCount != 0)
                break;
        }
        else if ((fh = f.hash) == MOVED)
            tab = helpTransfer(tab, f);
        else {
            boolean added = false;
            synchronized (f) {
                // only the node.hash >= 0 and TreeBin cases are handled; the ReservationNode case is missing
                // the checks during resizing (transfer) are similar
                if (tabAt(tab, i) == f) {
                    if (fh >= 0) {
                        binCount = 1;
                        for (Node<K,V> e = f;; ++binCount) {
                            K ek; V ev;
                            if (e.hash == h &&
                                ((ek = e.key) == key ||
                                 (ek != null && key.equals(ek)))) {
                                val = e.val;
                                break;
                            }
                            Node<K,V> pred = e;
                            if ((e = e.next) == null) {
                                if ((val = mappingFunction.apply(key)) != null) {
                                    added = true;
                                    pred.next = new Node<K,V>(h, key, val, null);
                                }
                                break;
                            }
                        }
                    }
                    else if (f instanceof TreeBin) {
                        binCount = 2;
                        TreeBin<K,V> t = (TreeBin<K,V>)f;
                        TreeNode<K,V> r, p;
                        if ((r = t.root) != null &&
                            (p = r.findTreeNode(h, key, null)) != null)
                            val = p.val;
                        else if ((val = mappingFunction.apply(key)) != null) {
                            added = true;
                            t.putTreeVal(h, key, val);
                        }
                    }
                }
            }
            if (binCount != 0) {
                if (binCount >= TREEIFY_THRESHOLD)
                    treeifyBin(tab, i);
                if (!added)
                    return val;
                break;
            }
        }
    }
    if (val != null)
        // update the element count; this checks the resize threshold and may trigger a resize
        addCount(1L, binCount);
    return val;
}
```
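One last detail: in the second example the spin does not even need a rehash. The map.put(k, k) inside the lambda goes through putVal and lands on the very slot that computeIfAbsent has just filled with a ReservationNode. Below is an abridged sketch of the relevant part of JDK 8's putVal (bin-handling bodies elided); since the reserved node is neither a list head (hash >= 0) nor a TreeBin, binCount stays 0, the loop never breaks, and the thread spins forever:

```java
// Abridged sketch of JDK 8 ConcurrentHashMap.putVal; bin bodies elided for brevity.
for (Node<K,V>[] tab = table;;) {
    Node<K,V> f; int n, i, fh;
    if (tab == null || (n = tab.length) == 0)
        tab = initTable();
    else if ((f = tabAt(tab, i = (n - 1) & hash)) == null) {
        // not taken: the slot already holds the ReservationNode installed by computeIfAbsent
        if (casTabAt(tab, i, null, new Node<K,V>(hash, key, value, null)))
            break;
    }
    else if ((fh = f.hash) == MOVED)
        tab = helpTransfer(tab, f);      // not taken either: hash is RESERVED (-3), not MOVED (-1)
    else {
        synchronized (f) {
            if (tabAt(tab, i) == f) {
                if (fh >= 0) { /* linked-list bin: insert or replace, binCount >= 1 */ }
                else if (f instanceof TreeBin) { /* tree bin: insert or replace, binCount = 2 */ }
                // ReservationNode: neither branch matches, binCount stays 0
            }
        }
        if (binCount != 0) {
            // treeify if needed, return the old value or break out of the loop
        }
        // binCount == 0, so no break: back to the top of the for loop, forever
    }
}
```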