This article looks at why a recursive factorial in Python runs slower than an equivalent reduce-based version, and whether the recursion can be sped up; hopefully it is a useful reference for anyone hitting the same problem.

Problem Description

I know this subject has been discussed at length, but I've come across a case where I don't really understand why the recursive method is "slower" than a method using reduce, lambda, and xrange.

def factorial2(x, rest=1):
    # recursive version: the running product is accumulated in `rest`
    if x <= 1:
        return rest
    else:
        return factorial2(x-1, rest*x)


def factorial3(x):
    # reduce/xrange version (Python 2, where reduce and xrange are builtins)
    if x <= 1:
        return 1
    return reduce(lambda a, b: a*b, xrange(1, x+1))

I know Python doesn't optimize tail recursion, so the question isn't about that. To my understanding, the generator should still produce n numbers using the +1 operator, so technically fact(n) should add a number n times, just like the recursive version. The lambda in the reduce will be called n times, just as the recursive method is... Since there is no tail-call optimization in either case, stack frames will be created/destroyed and returned from n times. And an if in the generator has to check when to raise a StopIteration exception.
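
To make the "called n times" claim concrete, here is a small instrumented sketch (not part of the original question; the mul helper and counter are mine) that counts how many times reduce invokes the multiplication function for x = 400:

calls = [0]

def mul(a, b):
    # stand-in for the lambda, with a call counter
    calls[0] += 1
    return a * b

reduce(mul, xrange(1, 401))
print calls[0]  # 399: one call per multiplication, i.e. roughly n times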

This makes me wonder why the recursive method is still slower than the other one, since the recursive version uses simple arithmetic and doesn't use generators.

In a test, I replaced rest*x with x in the recursive method, and the time spent dropped to be on par with the method using reduce.
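
A minimal sketch of that experiment (the function name is mine; the question only describes the change), which keeps the same call structure but never multiplies into the accumulator, so every intermediate value stays a small integer:

def factorial2_no_bigmul(x, rest=1):
    # same recursion as factorial2, but `rest*x` is replaced by `x`,
    # so the timing measures only call overhead, not big-integer arithmetic
    if x <= 1:
        return rest
    else:
        return factorial2_no_bigmul(x-1, x)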

Here are my timings for fact(400), 1000 times:

factorial2 : 1.79896998405

Making the method count from 1 up to n instead of from n down to 1 doesn't help either, so the -1 is not the source of the overhead.

Also, can we make the recursive method faster? I tried multiple things: global variables that I can change... using a mutable context by placing variables in cells (like an array) that I can modify, so the recursive method takes no parameters... passing the function used for recursion in as a parameter so we don't have to look it up in the enclosing scope... But nothing makes it faster.

I'll point out that I have a version of fact that uses a for loop and is much faster than both of these methods, so there is clearly room for improvement, but I wouldn't expect anything to beat the for loop.
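
The loop version isn't shown in the question; a minimal sketch of what it might look like (the name is mine), in the same Python 2 style as the code above:

def factorial_loop(x):
    # plain iterative product: no per-step function call or extra stack frame
    result = 1
    for i in xrange(2, x + 1):
        result *= i
    return result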

Recommended Answer

The slowness of the recursive version comes from the need to resolve the named (argument) variables on each call. I have provided a different recursive implementation that has only one argument, and it works slightly faster.

$ cat fact.py 
def factorial_recursive1(x):
    if x <= 1:
        return 1
    else:
        return factorial_recursive1(x-1)*x

def factorial_recursive2(x, rest=1):
    if x <= 1:
        return rest
    else:
        return factorial_recursive2(x-1, rest*x)

def factorial_reduce(x):
    if x <= 1:
        return 1
    return reduce(lambda a, b: a*b, xrange(1, x+1))

# Ignore the rest of the code for now, we'll get back to it later in the answer
def range_prod(a, b):
    if a + 1 < b:
        c = (a+b)//2
        return range_prod(a, c) * range_prod(c, b)
    else:
        return a
def factorial_divide_and_conquer(n):
    return 1 if n <= 1 else range_prod(1, n+1)

$ ipython -i fact.py 
In [1]: %timeit factorial_recursive1(400)
10000 loops, best of 3: 79.3 µs per loop
In [2]: %timeit factorial_recursive2(400)
10000 loops, best of 3: 90.9 µs per loop
In [3]: %timeit factorial_reduce(400)
10000 loops, best of 3: 61 µs per loop

Since very large numbers are involved in your example, I initially suspected that the performance difference might be due to the order of multiplication. Multiplying a large partial product by the next number on every iteration costs time proportional to the number of digits/bits in the partial product, so the overall time complexity of such a method is O(n²), where n is the number of bits in the final product. Instead it is better to use a divide-and-conquer technique, where the final result is obtained as the product of two approximately equally long values, each of which is computed recursively in the same manner. So I implemented that version too (see factorial_divide_and_conquer(n) in the code above). As you can see below, it still loses to the reduce()-based version for small arguments (due to the same problem with named parameters), but it outperforms it for large arguments.

In [4]: %timeit factorial_divide_and_conquer(400)
10000 loops, best of 3: 90.5 µs per loop
In [5]: %timeit factorial_divide_and_conquer(4000)
1000 loops, best of 3: 1.46 ms per loop
In [6]: %timeit factorial_reduce(4000)
100 loops, best of 3: 3.09 ms per loop
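
As a quick sanity check (not part of the original answer), the divide-and-conquer version can be compared against the reduce-based one from fact.py:

# assumes the definitions from fact.py above are in scope
for n in (1, 2, 10, 400):
    assert factorial_divide_and_conquer(n) == factorial_reduce(n)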

Update

Trying to run the factorial_recursive?() versions with x=4000 hits the default recursion limit, so the limit must be increased:

In [7]: sys.setrecursionlimit(4100)
In [8]: %timeit factorial_recursive1(4000)
100 loops, best of 3: 3.36 ms per loop
In [9]: %timeit factorial_recursive2(4000)
100 loops, best of 3: 7.02 ms per loop

That concludes this article on slow recursion in Python; hopefully the answer above is helpful.
