Problem description
What load factor should I use when I know the maximum possible number of elements in a HashSet? I have heard that the default load factor of 0.75 is recommended because it offers a good trade-off between speed and space. Is this correct? However, a larger HashSet also takes more time to create and uses more space.
I am using a HashSet just to remove duplicate integers from a list of integers.
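For the deduplication use case described above, a sketch of one common approach: size the set up front so that all elements fit without rehashing at the default 0.75 load factor, and use a `LinkedHashSet` if the original order should be preserved. The class and method names here are illustrative, not from the original question.

```java
import java.util.Arrays;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class Dedupe {
    // Remove duplicate integers while preserving first-seen order.
    static List<Integer> dedupe(List<Integer> input) {
        // Capacity chosen so input.size() elements never trigger a rehash
        // at the default load factor of 0.75.
        int capacity = (int) Math.ceil(input.size() / 0.75) + 1;
        Set<Integer> seen = new LinkedHashSet<>(capacity);
        seen.addAll(input);
        return List.copyOf(seen);
    }

    public static void main(String[] args) {
        List<Integer> input = Arrays.asList(3, 1, 4, 1, 5, 9, 2, 6, 5, 3);
        System.out.println(dedupe(input)); // [3, 1, 4, 5, 9, 2, 6]
    }
}
```

If order does not matter, a plain `HashSet` works the same way; the pre-sizing is the only part that interacts with the load-factor question.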
Accepted answer
I spent some time playing around with load factors once, and it is shocking how little difference that setting really makes in practice. Even setting it to something high like 2.0 doesn't slow things down much, nor does it save that much memory. Just pretend it doesn't exist. Josh has often regretted ever exposing it as an option at all.
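To illustrate the point in the answer, a minimal sketch comparing the default load factor with a deliberately high one. The `HashSet(int initialCapacity, float loadFactor)` constructor is standard `java.util` API; the sizes and 2.0f value are just example numbers, and correctness is identical in every case since the load factor only affects when the table resizes.

```java
import java.util.HashSet;
import java.util.Set;

public class LoadFactorDemo {
    public static void main(String[] args) {
        int n = 100_000;

        // Default: initial capacity 16, load factor 0.75.
        Set<Integer> defaults = new HashSet<>();
        // Pre-sized for n elements at the default 0.75 load factor:
        // no rehashing will occur during the inserts below.
        Set<Integer> presized = new HashSet<>((int) (n / 0.75f) + 1, 0.75f);
        // High load factor (2.0): smaller table, a few more collisions.
        Set<Integer> dense = new HashSet<>(n, 2.0f);

        for (int i = 0; i < n; i++) {
            defaults.add(i);
            presized.add(i);
            dense.add(i);
        }

        // All three contain exactly the same elements.
        System.out.println(defaults.equals(presized) && presized.equals(dense)); // true
    }
}
```

In informal timing runs the three variants tend to land close together, which is consistent with the answer's claim that the setting rarely matters in practice.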