Problem description
I have a pandas DataFrame of about 100M rows. Processing it in parallel works very well on a multi-core machine, with 100% utilization of each core. However, the result of executor.map() is a generator, so in order to actually collect the processed results I iterate over that generator. This is very, very slow (hours), in part because it is single-core and in part because of the loop. In fact, it is much slower than the actual processing in my_function().
Is there a better way (perhaps concurrent and/or vectorized)?
Using pandas 0.23.4 (latest at this time) with Python 3.7.0
import concurrent.futures
import pandas as pd

df = pd.DataFrame({'col1': [], 'col2': [], 'col3': []})

with concurrent.futures.ProcessPoolExecutor() as executor:
    gen = executor.map(my_function, list_of_values, chunksize=1000)

    # the following is single-threaded and also very slow
    for x in gen:
        df = pd.concat([df, x])  # anything better than doing this?

return df
Recommended answer
Here is a benchmark related to your case: https://stackoverflow.com/a/31713471/5588279
As you can see, calling concat (append) multiple times is very inefficient. You should just do pd.concat(gen). I believe the underlying implementation will preallocate all the needed memory.
In your case, the memory allocation is done every time.
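Here is a minimal sketch of the single-concat version, keeping the my_function and list_of_values names from the question and assuming my_function returns a DataFrame for each input value (the process_all wrapper is only for illustration):

import concurrent.futures
import pandas as pd

def process_all(my_function, list_of_values):
    with concurrent.futures.ProcessPoolExecutor() as executor:
        # runs my_function on all cores; results are yielded lazily, in input order
        gen = executor.map(my_function, list_of_values, chunksize=1000)
        # a single concat: pandas sizes the output once and copies each chunk
        # only once, instead of re-copying the accumulated frame on every iteration
        return pd.concat(gen)

Concatenating inside the loop copies everything accumulated so far on every pass, so the collection step grows roughly quadratically with the number of chunks; the single pd.concat call does one pass over the results.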