Python 2.7: how to compensate for the missing pool.starmap?

This article describes how to deal with the question "Python 2.7: how to compensate for the missing pool.starmap?". It may serve as a useful reference for anyone facing the same problem.

Problem description

I have defined this function:

import random

def writeonfiles(a, seed):
    random.seed(seed)
    f = open(a, "w+")
    for i in range(0, 10):
        j = random.randint(0, 10)
        # file.write needs a string, not an int
        f.write(str(j))
    f.close()

where a is a string containing the path of the file and seed is an integer seed. I want to parallelize a simple program in such a way that each core takes one of the available paths that I give in, seeds its random generator and writes some random numbers to that file. For example, if I pass the vector

vector = ["Test/file1.txt", "Test/file2.txt"]

and the seeds

seeds = (123412, 989898),

it gives to the first available core the function call

writeonfiles("Test/file1.txt", 123412)

and to the second one the same function with different arguments:

writeonfiles("Test/file2.txt", 989898)
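For reference, the intended behaviour can first be checked serially. This is only a sketch under assumptions not stated in the question: the Test/ directory is created on the fly, and the integers are written as text (one per line) so that file.write accepts them.

```python
import os
import random

def writeonfiles(a, seed):
    # serial version of the target: seed, then write ten random ints
    random.seed(seed)
    with open(a, "w") as f:
        for _ in range(10):
            # write ints as text; file.write rejects raw ints
            f.write("%d\n" % random.randint(0, 10))

# assumed directory layout, not from the question
if not os.path.isdir("Test"):
    os.makedirs("Test")

for path, s in zip(["Test/file1.txt", "Test/file2.txt"], (123412, 989898)):
    writeonfiles(path, s)

lines = open("Test/file1.txt").read().splitlines()
print(len(lines))  # 10
```

Parallelizing this means running each (path, seed) call on its own worker, which is exactly what the failed attempts below try to do.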

I have looked through a lot of similar questions here on Stack Overflow, but I cannot make any solution work. What I tried is:

import multiprocessing

def writeonfiles_unpack(args):
    return writeonfiles(*args)

if __name__ == "__main__":
    folder = ["Test/%d.csv" % i for i in range(0, 4)]
    seed = [234124, 663123, 12345, 123833]
    p = multiprocessing.Pool()
    p.map(writeonfiles, (folder, seed))

and it gives me TypeError: writeonfiles() takes exactly 2 arguments (1 given).
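The error follows from how map iterates: it calls the function once per element of the iterable, passing that element as a single argument. A small stand-in function (hypothetical, for illustration only) makes this visible without a pool:

```python
def writeonfiles_stub(a, seed):
    return (a, seed)

args = (["Test/0.csv", "Test/1.csv"], [234124, 663123])

failed = False
# map-style iteration passes each element as ONE positional argument:
# the first call receives the whole folder list, not a (path, seed) pair.
try:
    for item in args:
        writeonfiles_stub(item)
except TypeError:
    failed = True
print(failed)  # True

# zip produces the (path, seed) pairs the function actually needs
pairs = list(zip(*args))
print(pairs)  # [('Test/0.csv', 234124), ('Test/1.csv', 663123)]
```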

I also tried

import multiprocessing

if __name__ == "__main__":
    folder = ["Test/%d.csv" % i for i in range(0, 4)]
    seed = [234124, 663123, 12345, 123833]
    p = multiprocessing.Process(target=writeonfiles, args=[folder, seed])
    p.start()

But it gives me

File "/usr/lib/python2.7/random.py", line 120, in seed
    super(Random, self).seed(a)
TypeError: unhashable type: 'list'
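The traceback means the whole seed list reached random.seed: with args=[folder, seed], the Process calls writeonfiles(folder, seed), so a is the list of paths and seed is the list of seeds, and random.seed requires a hashable scalar, which a list is not. A minimal reproduction:

```python
import random

seeds = [234124, 663123, 12345, 123833]

# Passing the whole list, as Process(args=[folder, seeds]) effectively
# does, fails: a list cannot be used as a seed value.
try:
    random.seed(seeds)
    seed_error = None
except TypeError as exc:
    seed_error = type(exc).__name__
print(seed_error)  # TypeError

random.seed(seeds[0])  # a single int works fine
```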

Finally, I tried the contextmanager

from contextlib import contextmanager
import multiprocessing

@contextmanager
def poolcontext(*args, **kwargs):
    pool = multiprocessing.Pool(*args, **kwargs)
    yield pool
    pool.terminate()

if __name__ == "__main__":
    folder = ["Test/%d" % i for i in range(0, 4)]
    seed = [234124, 663123, 12345, 123833]
    a = zip(folder, seed)
    with poolcontext(processes=3) as pool:
        results = pool.map(writeonfiles_unpack, a)

and it results in

File "/usr/lib/python2.7/multiprocessing/pool.py", line 572, in get
    raise self._value
TypeError: 'module' object is not callable

Solution

Python 2.7 lacks the starmap pool-method from Python 3.3+. You can overcome this by decorating your target function with a wrapper, which unpacks the argument tuple and calls the target function:

import os
from multiprocessing import Pool
import random
from functools import wraps


def unpack(func):
    @wraps(func)
    def wrapper(arg_tuple):
        return func(*arg_tuple)
    return wrapper

@unpack
def write_on_files(a, seed):
    random.seed(seed)
    print("%d opening file %s" % (os.getpid(), a))  # simulate
    for _ in range(10):
        j = random.randint(0, 10)
        print("%d writing %d to file %s" % (os.getpid(), j, a))  # simulate


if __name__ == '__main__':

    folder = ["Test/%d.csv" % i for i in range(0, 4)]
    seed = [234124, 663123, 12345, 123833]

    arguments = zip(folder, seed)

    pool = Pool(4)
    pool.map(write_on_files, iterable=arguments)
    pool.close()
    pool.join()


This concludes the article on "Python 2.7: how to compensate for the missing pool.starmap?". We hope the recommended answer is helpful.
