What makes GCC's std::list sort implementation so fast?


Question


I have a linked list implementation and I'm experimenting with both Mergesort and QuickSort algorithms.

What I don't understand is why the sort operation in std::list is so fast. Looking at std::list under Linux, it appears to be a linked list as well, not an array-based list.

The merge sort I tried is almost identical to Dave Gamble's version here: Merge Sort a Linked List

Also, I thought I'd try a simple quicksort based on this code: http://www.flipcode.com/archives/Quick_Sort_On_Linked_List.shtml

Surprisingly, sorting 10 million random numbers using std::list and its sort was around 10 times quicker than either of those others.
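The measurement described above can be sketched roughly as follows. This is a minimal illustration, not the original poster's code: the element count is reduced from 10 million so the sketch runs quickly, and the fixed seed, value range, and function name `time_list_sort` are my own choices.

```cpp
#include <algorithm>
#include <chrono>
#include <list>
#include <random>

// Fills a std::list with n pseudo-random ints, times its member sort(),
// and returns the elapsed milliseconds. *ok is set to whether the result
// really ended up sorted. Raise n toward 10'000'000 to reproduce the
// scale mentioned in the question.
long long time_list_sort(std::size_t n, bool* ok) {
    std::mt19937 rng(42);  // fixed seed so runs are repeatable
    std::uniform_int_distribution<int> dist(0, 1000000);

    std::list<int> values;
    for (std::size_t i = 0; i < n; ++i)
        values.push_back(dist(rng));

    auto start = std::chrono::steady_clock::now();
    values.sort();  // member sort: std::sort needs random-access iterators
    auto elapsed = std::chrono::steady_clock::now() - start;

    *ok = std::is_sorted(values.begin(), values.end());
    return std::chrono::duration_cast<std::chrono::milliseconds>(elapsed)
        .count();
}
```

Note that std::sort cannot be used here at all: std::list only provides bidirectional iterators, which is why the list supplies its own member sort.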

And for those who are asking: yes, I need to use my own list class for this project.

Answer

I've been taking a look at the interesting GLibC implementation of list::sort (source code), and it doesn't seem to implement a traditional merge sort algorithm (at least not one I've ever seen before).

Basically it:

  1. Creates a series of buckets (64 total).
  2. Removes the first element of the list to sort and merges it with the first (i=0th) bucket.
  3. If, before the merge, the ith bucket is not empty, merge the ith bucket with the i+1th bucket.
  4. Repeat step 3 until we merge with an empty bucket.
  5. Repeat steps 2 and 3 until the list to sort is empty.
  6. Merge all the remaining non-empty buckets together starting from smallest to largest.

Small note: merging a bucket X with a bucket Y will remove all the elements from bucket X and add them to bucket Y while keeping everything sorted. Also note that the number of elements within a bucket is either 0 or 2^i.
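The steps above can be sketched with std::list itself standing in for the buckets. This is a simplified illustration of the described algorithm, not the actual library source; the function name `bucket_sort` and the use of the int element type are my own choices for the sketch.

```cpp
#include <list>
#include <utility>

// Bucket-based merge sort as described above. Bucket i holds a sorted run
// of either 0 or 2^i elements, so 64 buckets cover any practical size.
std::list<int> bucket_sort(std::list<int> input) {
    const int kMaxBuckets = 64;
    std::list<int> carry;                // run currently being merged upward
    std::list<int> buckets[kMaxBuckets]; // bucket i: 0 or 2^i sorted elements
    int fill = 0;                        // number of buckets in use so far

    while (!input.empty()) {
        // Step 2: move the first element of the input into 'carry'.
        carry.splice(carry.begin(), input, input.begin());

        // Steps 3-4: merge 'carry' into successive buckets until we reach
        // an empty one. merge() empties its argument, keeping order.
        int i = 0;
        while (i < fill && !buckets[i].empty()) {
            buckets[i].merge(carry);  // carry is now empty
            carry.swap(buckets[i]);   // merged run moves back into carry
            ++i;
        }
        carry.swap(buckets[i]);       // deposit run in first empty bucket
        if (i == fill) ++fill;
    }

    // Step 6: merge all remaining buckets, smallest to largest.
    for (int i = 1; i < fill; ++i)
        buckets[i].merge(buckets[i - 1]);
    return fill ? std::move(buckets[fill - 1]) : std::list<int>();
}
```

Because merging works purely by splicing nodes, no elements are ever copied, and each element takes part in O(log n) merges overall.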

Now why is this faster than a traditional merge sort? Well, I can't say for sure, but here are a few things that come to mind:

  • It never traverses the list to find a mid-point which also makes the algorithm more cache friendly.
  • Because the earlier buckets are small and used more frequently, the calls to merge trash the cache less frequently.
  • The compiler is able to optimize this implementation better. Would need to compare the generated assembly to be sure about this.

I'm pretty sure the folks who implemented this algorithm tested it thoroughly, so if you want a definitive answer you'll probably have to ask on the GCC mailing list.
