This article looks at the question "Python multiprocessing: why is the processing pool slower?" along with a recommended answer; it should be a useful reference for anyone hitting the same problem.

Problem Description

So I've been messing around with Python's multiprocessing lib for the last few days, and I really like the processing pool. It's easy to implement and I can see a lot of uses for it. I've done a couple of projects I'd heard about before to familiarize myself with it, and recently finished a program that brute-forces games of hangman.

Anywho, I was doing an execution-time comparison of summing all the prime numbers between 1 million and 2 million, both single-threaded and through a processing pool. Now, for the hangman cruncher, putting the games in a processing pool improved execution time by about 8 times (i7 with 8 cores), but when grinding out these primes, it actually increased processing time by almost a factor of 4.
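The post doesn't include the timing code itself; as a minimal sketch (my own, not from the question), such a comparison could be taken by wrapping each version like this:

import time

t0 = time.time()
main()  # whichever version is being measured: single-threaded or pooled
print "elapsed: %.2f seconds" % (time.time() - t0)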

Can anyone tell me why this is? Here is the code for anyone interested in looking at or testing it:

#!/usr/bin/python
import math
from multiprocessing import Pool

global primes
primes = []

def log(result):
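    # callback for apply_async: isPrime returns (True, n) for a prime and
    # False otherwise, so a truthy result carries the prime in result[1]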
    global primes

    if result:
        primes.append(result[1])

def isPrime( n ):
    if n < 2:
        return False
    if n == 2:
        return True, n

    max = int(math.ceil(math.sqrt(n)))
    i = 2
    while i <= max:
        if n % i == 0:
            return False
        i += 1
    return True, n


def main():

    global primes

    #pool = Pool()

    for i in range(1000000, 2000000):
        #pool.apply_async(isPrime,(i,), callback = log)
        temp = isPrime(i)
        log(temp)

    #pool.close()
    #pool.join()

    print sum(primes)

    return

if __name__ == "__main__":
    main()

As written, it runs in a single thread; to run it through the processing pool, uncomment the pool statements and comment out the other lines in the main for loop.

Recommended Answer

The most efficient way to use multiprocessing is to divide the work into n equal-sized chunks, with n the size of the pool, which should be approximately the number of cores on your system. The reason for this is that the work of starting subprocesses and communicating between them is quite large. If each chunk of work is small relative to the cost of dispatching it, the overhead of IPC becomes significant.

In your case, you're asking multiprocessing to process each candidate number individually. A better way to deal with the problem is to pass each worker a range of values (probably just a start and end value) and have it return all of the primes it found in that range.

In the case of identifying large-ish primes, the work done grows with the starting value, so you probably don't want to divide the total range into exactly n chunks, but rather into n*k equal chunks, with k some reasonably small number, say 10-100. That way, when some workers finish before others, there's more work left to do and it can be balanced efficiently across all workers.
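As a hedged sketch of that n*k scheme (my own illustration, not part of the original answer, which sticks with a fixed step below), the chunk size could be derived from the pool size like this:

from multiprocessing import cpu_count

start, stop = 1000000, 2000000
k = 20                            # oversubscription factor; 10-100 is reasonable
n = cpu_count()                   # Pool() defaults to one worker per core
step = (stop - start) // (n * k)  # size of each of the n*k equal chunks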

Here's an improved example to show what that solution might look like. I've changed as little as possible so you can compare apples to apples.

#!/usr/bin/python
import math
from multiprocessing import Pool

global primes
primes = set()

def log(result):
    global primes

    if result:
        # since the result is a batch of primes, we have to use
        # update instead of add (or for a list, extend instead of append)
        primes.update(result)

def isPrime( n ):
    if n < 2:
        return False
    if n == 2:
        return True, n

    max = int(math.ceil(math.sqrt(n)))
    i = 2
    while i <= max:
        if n % i == 0:
            return False
        i += 1
    return True, n

def isPrimeWorker(start, stop):
    """
    find a batch of primes
    """
    primes = set()
    for i in xrange(start, stop):
        if isPrime(i):
            primes.add(i)

    return primes



def main():

    global primes

    pool = Pool()

    # pick an arbitrary chunk size, this will give us 100 different
    # chunks, but another value might be optimal
    step = 10000

    # use xrange instead of range, we don't actually need a list, just
    # the values in that range.
    for i in xrange(1000000, 2000000, step):
        # call the *worker* function with start and stop values.
        pool.apply_async(isPrimeWorker,(i, i+step,), callback = log)

    pool.close()
    pool.join()

    print sum(primes)

    return

if __name__ == "__main__":
    main()
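With this batching, each apply_async call ships only two integers to a worker and receives a single set of primes back per chunk, so roughly a hundred IPC round trips replace the original million, which is exactly the overhead the answer identifies as the cause of the slowdown.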
