Timing that causes HashMap.put() to execute an infinite loop


Problem description


As a number of people have noted and encountered, HashMap.put can go into an infinite execution loop when used concurrently (see GRIZZLY-1207, JGRP-525, possibly HHH-6414, and this SO answer).


HashMap is clearly documented as not thread safe. Obviously, the correct fix is to use a thread-safe implementation of Map, ConcurrentHashMap in particular. I'm more curious about the concurrent timing that causes the infinite loop. I encountered this loop recently with a Java 7 JRE and would like to understand the exact causes. For example, is this caused by multiple puts at the same time?
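To make the documented fix concrete, here is a minimal sketch (the class name, thread count, and key ranges are illustrative, not from the original post): several threads put into one shared ConcurrentHashMap at the same time, which is exactly the usage pattern that can hang a plain HashMap on a Java 7 JRE.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentPutDemo {
    // Fill one shared map from several threads at once and report its final size.
    static int fill() {
        Map<Integer, Integer> map = new ConcurrentHashMap<>();
        Thread[] threads = new Thread[4];
        for (int t = 0; t < threads.length; t++) {
            final int offset = t * 10_000;   // distinct key range per thread
            threads[t] = new Thread(() -> {
                for (int i = 0; i < 10_000; i++) {
                    map.put(offset + i, i);  // safe under concurrency
                }
            });
            threads[t].start();
        }
        for (Thread t : threads) {
            try {
                // Terminates; with a plain HashMap, a put can spin forever instead.
                t.join();
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        }
        return map.size();
    }

    public static void main(String[] args) {
        System.out.println(fill());  // 40000
    }
}
```

Swapping `new ConcurrentHashMap<>()` for `new HashMap<>()` here is the kind of code that exhibits the bug under discussion, non-deterministically, when a resize races.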


A look inside HashMap.put shows that HashMap.Entry contains a link to the next node (in the bucket?). I assume these links are getting corrupted to contain circular references, which is causing the infinite loop. However, I still don't understand exactly how that corruption is occurring.
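The suspicion above can be modeled directly. Below is a simplified stand-in for JDK 7's HashMap.Entry (the class and the step limit are mine, for illustration): once two entries' next links form a cycle, the `for (Entry e = head; e != null; e = e.next)` traversal that put() performs never reaches null, so it never terminates. A safety limit is used here so the demo itself cannot hang.

```java
public class CycleDemo {
    // Simplified stand-in for java.util.HashMap.Entry in JDK 7:
    // each entry links to the next entry in the same bucket.
    static final class Entry {
        final int key;
        Entry next;
        Entry(int key) { this.key = key; }
    }

    // Walk a bucket chain the way put() does, but give up after
    // 'limit' steps so a corrupted (circular) chain cannot hang us.
    static int walk(Entry head, int limit) {
        int steps = 0;
        for (Entry e = head; e != null && steps < limit; e = e.next) {
            steps++;
        }
        return steps;
    }

    public static void main(String[] args) {
        Entry a = new Entry(1), b = new Entry(2);
        a.next = b;                         // healthy chain: a -> b -> null
        System.out.println(walk(a, 1000));  // 2: traversal terminates

        b.next = a;                         // corruption: a -> b -> a -> ...
        System.out.println(walk(a, 1000));  // 1000: hits the safety limit
    }
}
```

The real put() has no such limit, which is why the corrupted chain manifests as a thread pinned at 100% CPU rather than an exception.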

Recommended answer


Contrary to what many people think, the main issue with multi-threading and HashMaps is not just a duplicate entry or a vanishing one... As you said, an infinite loop can occur when two or more threads concurrently decide to resize the HashMap.


If the size of the HashMap passes a given threshold, several threads might end up trying to resize it at the same time, and if we are lucky enough (you deployed the code in production already) they will keep going forever...


The issue is caused by the way void resize(int newCapacity) and void transfer(Entry[] newTable) are implemented; you can take a look at the OpenJDK source code yourself. It is a mix of bad luck, good timing, and entries that get reversed (ordering is not required in this data structure) and end up mistakenly referring to each other, while a thread keeps looping on while (e != null)...
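The reversal mentioned above can be shown in isolation. This is a single-threaded sketch of the rehashing loop from JDK 7's transfer() (the Entry class is simplified and the bucket-index recomputation via indexFor() is omitted, so every entry lands in one bucket): each entry is head-inserted into the new table, which reverses the chain. The comments mark the suspension point where a second thread racing through the same loop leaves the first thread's local pointers aimed at already-reversed links.

```java
public class TransferDemo {
    static final class Entry {
        final int key;
        Entry next;
        Entry(int key, Entry next) { this.key = key; this.next = next; }
    }

    // Simplified model of the inner loop of JDK 7's HashMap.transfer():
    // head-insertion into the new bucket reverses the chain's order.
    static Entry transfer(Entry oldHead) {
        Entry newHead = null;
        Entry e = oldHead;
        while (e != null) {
            Entry next = e.next;  // thread A can be suspended right here...
            e.next = newHead;     // ...while thread B runs the whole loop,
            newHead = e;          // so A's 'e' and 'next' now point at entries
            e = next;             // whose links B has already reversed.
        }
        return newHead;
    }

    public static void main(String[] args) {
        Entry chain = new Entry(1, new Entry(2, new Entry(3, null)));
        Entry reversed = transfer(chain);
        StringBuilder sb = new StringBuilder();
        for (Entry e = reversed; e != null; e = e.next) sb.append(e.key);
        System.out.println(sb);  // "321": the chain order is reversed
    }
}
```

When thread A resumes and finishes its own loop over links B already flipped, an entry can end up with a next pointer back to an earlier entry, producing exactly the circular reference the questioner suspected.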


While I could try to give you an explanation myself, I want to give credit to Paul Tyma's post (I cannot do better than him anyway), where I first learned how this works when, several months ago, I decided to figure out why I wasn't hired for a job...

http://mailinator.blogspot.com/2009/06/beautiful-race-condition.html


As Paul says, the best word to describe this race condition is: beautiful.

