Question
I want to sort on the order of four million long longs in C. Normally I would just malloc() a buffer to use as an array and call qsort(), but four million * 8 bytes is one huge chunk of contiguous memory.
What's the easiest way to do this? I rate ease over pure speed for this. I'd prefer not to use any libraries and the result will need to run on a modest netbook under both Windows and Linux.
Answer
Just allocate a buffer and call qsort(). 32MB isn't so very big these days, even on a modest netbook.
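A minimal sketch of that approach (the element count and the fill step are placeholders; the comparator uses a comparison rather than subtraction so it can't overflow):

#include <stdio.h>
#include <stdlib.h>

/* Comparator for long long; (x > y) - (x < y) avoids overflow. */
static int cmp_ll(const void *a, const void *b)
{
    long long x = *(const long long *)a;
    long long y = *(const long long *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    size_t n = 4000000;                     /* ~4 million elements, ~32 MB */
    long long *buf = malloc(n * sizeof *buf);
    if (buf == NULL) {
        perror("malloc");
        return 1;
    }

    /* ... fill buf with the values to sort ... */

    qsort(buf, n, sizeof *buf, cmp_ll);

    free(buf);
    return 0;
}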
If you really must split it up: sort smaller chunks, write them to files, and merge them (a merge takes a single linear pass over each of the things being merged). But, really, don't. Just sort it.
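If you did go the split-and-merge route, the merge step could look roughly like the sketch below: each chunk would first be sorted in memory with qsort() and written out as raw long long values, and then the sorted chunk files would be merged with sequential reads and writes. The file names are hypothetical, and this only merges two chunks at a time; merging more files would repeat the same idea.

#include <stdio.h>
#include <stdlib.h>

/* Merge two files of sorted long long values into one sorted output file.
 * All I/O is sequential and only two values are held in memory at a time. */
static int merge_files(const char *path_a, const char *path_b, const char *path_out)
{
    FILE *fa = fopen(path_a, "rb");
    FILE *fb = fopen(path_b, "rb");
    FILE *fo = fopen(path_out, "wb");
    if (!fa || !fb || !fo) {
        perror("fopen");
        if (fa) fclose(fa);
        if (fb) fclose(fb);
        if (fo) fclose(fo);
        return -1;
    }

    long long va, vb;
    int have_a = fread(&va, sizeof va, 1, fa) == 1;
    int have_b = fread(&vb, sizeof vb, 1, fb) == 1;

    /* Repeatedly emit the smaller of the two current values. */
    while (have_a && have_b) {
        if (va <= vb) {
            fwrite(&va, sizeof va, 1, fo);
            have_a = fread(&va, sizeof va, 1, fa) == 1;
        } else {
            fwrite(&vb, sizeof vb, 1, fo);
            have_b = fread(&vb, sizeof vb, 1, fb) == 1;
        }
    }
    /* Copy whatever remains of the input that isn't exhausted yet. */
    while (have_a) {
        fwrite(&va, sizeof va, 1, fo);
        have_a = fread(&va, sizeof va, 1, fa) == 1;
    }
    while (have_b) {
        fwrite(&vb, sizeof vb, 1, fo);
        have_b = fread(&vb, sizeof vb, 1, fb) == 1;
    }

    fclose(fa);
    fclose(fb);
    fclose(fo);
    return 0;
}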
(There's a good discussion of the sort-and-merge approach in volume 2 of Knuth, where it's called "external sorting". When Knuth was writing that, the external data would have been on magnetic tape, but the principles aren't very different with discs: you still want your I/O to be as sequential as possible. The tradeoffs are a bit different with SSDs.)