This article describes how to generate the permutations of a list iteratively in Ruby/Erlang, without recursion or keeping a stack, and may serve as a useful reference for anyone tackling the same problem.
Problem description
I would like to generate all permutations of a list, but I would like to filter out some of the permutations before they are added to the stack or stored anywhere.
I will filter out the permutations based on some custom ad-hoc rules.
In other words, I would like to generate a list of permutations of a large list (50-300 elements), but I would like to throw out most of the generated permutations right during the process (I know that the full number of permutations is N!).
I have tried Ruby with its Array.permutation.to_a, but it looks like it maintains a full stack during execution, so I ran out of memory (8 GB) fairly quickly.
I have also tried this Erlang solution, but it seems to perform similarly to the previous Ruby one.
Are there any custom solutions to this problem?
P.S. I have read this and this, but unfortunately I do not know C/C++.
Solution
Ruby's Array.permutation.to_a does indeed produce an array. Don't use to_a then! It means 'to array'. Array.permutation gives you an 'enumerator', a generator. Since you want to throw out most permutations, use reject on it.
res = [1,2,3,4].permutation(3).reject do |perm|
perm.first.even? #if this line is true, the perm will be rejected
end
will generate all permutations of three elements and reject those with an even number in the first position. But ehm... have you seen how large 300! is?
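Building on the answer above, here is a minimal sketch (not part of the original answer; the input list and the rejection rule are placeholders) showing two ways to filter permutations as they are generated, without ever materializing the full result: passing a block to Array#permutation, which yields one permutation at a time, and chaining the enumerator through lazy so rejected permutations are never stored.

list = (1..10).to_a  # placeholder input; a real 50-300 element list has far too many permutations to enumerate fully

# 1) Block form: Array#permutation yields each permutation as it is generated,
#    so nothing accumulates unless you store it yourself.
kept = 0
list.permutation(3) do |perm|
  next if perm.first.even?  # ad-hoc rule: drop this permutation immediately
  kept += 1                 # ...or process/emit the surviving permutation here
end
puts kept

# 2) Lazy enumerator: reject is applied on the fly, and only as many
#    results as requested are ever produced.
first_ten = list.permutation(3).lazy
                .reject { |perm| perm.first.even? }
                .first(10)
p first_ten

Even with these, keep the scale in mind: 300! is roughly 10^614, so no streaming trick can visit every permutation of a 300-element list; the ad-hoc rules would have to prune the overwhelming majority of candidates far earlier than that for the computation to finish.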
This concludes the article on generating the permutations of a list iteratively in Ruby/Erlang without recursion or a stack; hopefully the answer above is helpful.