My goal is to create one large dataframe on which I can then run operations, such as averaging each row across the columns, etc.

The problem is that each iteration gets slower as the dataframe grows, so I never manage to finish the computation.

Note: my dfs have only two columns, of which col1 is not needed in itself, hence why I join on it. col1 is a string and col2 is a float. The number of rows is 3k. Here is an example:

folder_paths    float
folder/Path     1.12630137
folder/Path2    1.067517426
folder/Path3    1.06443264
folder/Path4    1.049119625
folder/Path5    1.039635769
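
For reference, the sample above can be reproduced as a frame like this (column names taken verbatim from the listing):

import pandas as pd

# the five sample rows shown above
df_sample = pd.DataFrame({
    'folder_paths': ['folder/Path', 'folder/Path2', 'folder/Path3',
                     'folder/Path4', 'folder/Path5'],
    'float': [1.12630137, 1.067517426, 1.06443264, 1.049119625, 1.039635769],
})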

Question: any ideas on how to make this code more efficient, and where the bottleneck is? Also, I'm not sure merge is the way to go.

Current idea: one solution I'm considering is to allocate the memory up front and specify the column types: col1 as string and col2 as float.
import pandas as pd

df = pd.DataFrame()  # start from an empty frame

for i in range(1000):
    if i == 0:  # note: 'is 0' compares identity, not value; use == for ints
        df = generate_new_df(arg1, arg2)
    else:
        # outer-join each new frame on col1, so df grows wider every iteration
        df = pd.merge(df, generate_new_df(arg1, arg2), on='col1', how='outer')
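
A rough sketch of that preallocation idea, assuming every generated frame shares the same col1 values (as the note above suggests), so each run only fills one preallocated float column; n_runs and the reindex guard are my assumptions:

import numpy as np
import pandas as pd

n_runs = 1000  # hypothetical: one generated frame per run

# Build the index once from the first frame, preallocate every float column,
# then fill one column per run instead of growing the frame.
first = generate_new_df(arg1, arg2)
out = pd.DataFrame(np.empty((len(first), n_runs), dtype='float64'),
                   index=first['col1'])
out.iloc[:, 0] = first['col2'].to_numpy()

for i in range(1, n_runs):
    new = generate_new_df(arg1, arg2)
    # reindex guards against a different row order in col1
    out.iloc[:, i] = new.set_index('col1')['col2'].reindex(out.index).to_numpy()

This would avoid recopying the whole accumulated frame on every iteration, which is what makes the merge loop above slow down as df grows.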

I also tried using pd.concat, but the results are very similar: the time goes up after every iteration.
df = pd.concat([df, get_os_is_from_folder(pnlList, sampleSize, randomState)], axis=1)
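
Roughly how per-run times like the ones below can be collected, reusing the concat line above (a sketch; get_os_is_from_folder and its arguments are from my code):

import time

df = pd.DataFrame()
for i in range(28):
    t0 = time.perf_counter()
    df = pd.concat([df, get_os_is_from_folder(pnlList, sampleSize, randomState)], axis=1)
    print(f"run {i + 1}\ntime {time.perf_counter() - t0:.2f}s")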

Results with pd.concat:
run 1 time 0.34s
run 2 time 0.34s
run 3 time 0.32s
run 4 time 0.33s
run 5 time 0.42s
run 6 time 0.41s
run 7 time 0.45s
run 8 time 0.46s
run 9 time 0.54s
run 10 time 0.58s
run 11 time 0.73s
run 12 time 0.72s
run 13 time 0.79s
run 14 time 0.87s
run 15 time 0.95s
run 16 time 1.06s
run 17 time 1.19s
run 18 time 1.24s
run 19 time 1.37s
run 20 time 1.57s
run 21 time 1.68s
run 22 time 1.93s
run 23 time 1.86s
run 24 time 1.96s
run 25 time 2.11s
run 26 time 2.32s
run 27 time 2.42s
run 28 time 2.57s

Using a list (dfList) with a single pd.concat produces similar results. Here are the code and the results:
dfList = []
for i in range(1000):
    dfList.append(generate_new_df(arg1, arg2))

df = pd.concat(dfList, axis=1)  # a single concat over the whole list

Results:
run 1 took 0.35 sec.
run 2 took 0.26 sec.
run 3 took 0.3 sec.
run 4 took 0.33 sec.
run 5 took 0.45 sec.
run 6 took 0.49 sec.
run 7 took 0.54 sec.
run 8 took 0.51 sec.
run 9 took 0.51 sec.
run 10 took 1.06 sec.
run 11 took 1.74 sec.
run 12 took 1.47 sec.
run 13 took 1.25 sec.
run 14 took 1.04 sec.
run 15 took 1.26 sec.
run 16 took 1.35 sec.
run 17 took 1.7 sec.
run 18 took 1.73 sec.
run 19 took 6.03 sec.
run 20 took 1.63 sec.
run 21 took 1.93 sec.
run 22 took 1.84 sec.
run 23 took 2.25 sec.
run 24 took 2.65 sec.
run 25 took 6.84 sec.
run 26 took 2.88 sec.
run 27 took 2.58 sec.
run 28 took 2.81 sec.
run 29 took 2.84 sec.
run 30 took 2.99 sec.
run 31 took 3.12 sec.
run 32 took 3.48 sec.
run 33 took 3.35 sec.
run 34 took 3.6 sec.
run 35 took 4.0 sec.
run 36 took 4.41 sec.
run 37 took 4.88 sec.
run 38 took 4.92 sec.
run 39 took 4.78 sec.
run 40 took 5.02 sec.
run 41 took 5.32 sec.
run 42 took 5.31 sec.
run 43 took 5.78 sec.
run 44 took 5.77 sec.
run 45 took 6.15 sec.
run 46 took 6.4 sec.
run 47 took 6.84 sec.
run 48 took 7.08 sec.
run 49 took 7.48 sec.
run 50 took 7.91 sec.

Best Answer

It's still not clear exactly what your problem is, but I'm going to assume that the main bottleneck is that you are trying to load a huge number of dataframes into a list all at once, and you're running into memory/paging issues. With that in mind, here is an approach that might help, but you will have to test it yourself since I don't have access to your generate_new_df function or your data.

The approach is a variation on this answer, using the merge_with_concat function below: first merge smaller batches of dataframes together, then merge the results all at once.

For example, if you have 1000 dataframes, you could merge 100 together at a time to give you 10 big dataframes, and then merge those final 10 together as a last step. This should ensure that you never have too large a list of dataframes at any one point.

You can use the two functions below (I'm assuming your generate_new_df function takes a file name as one of its arguments) and do something like this:

def chunk_dfs(file_names, chunk_size):
    """Yield lists of dataframes, chunk_size frames at a time."""
    dfs = []
    for f in file_names:
        dfs.append(generate_new_df(f))
        if len(dfs) == chunk_size:
            yield dfs
            dfs = []
    if dfs:
        yield dfs  # don't drop the final partial chunk


def merge_with_concat(dfs, col):
    # Index every frame on the join column, then align them all in a single
    # concat instead of a chain of pairwise merges.
    dfs = (df.set_index(col, drop=True) for df in dfs)
    merged = pd.concat(dfs, axis=1, join='outer', copy=False)
    return merged.reset_index(drop=False)

col_name = "name_of_column_to_merge_on"
file_names = ['list/of', 'file/names', ...]
chunk_size = 100

merged = merge_with_concat(
    (merge_with_concat(dfs, col_name) for dfs in chunk_dfs(file_names, chunk_size)),
    col_name,
)
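
Since I can't test against your data, here is a self-contained toy check of the chunked approach; this generate_new_df is a hypothetical stand-in mimicking your description (3k shared keys in col1, one float column per frame):

import numpy as np
import pandas as pd

def generate_new_df(name):
    # toy stand-in: same col1 keys every time, one uniquely named float column
    return pd.DataFrame({
        'col1': [f'folder/Path{i}' for i in range(3000)],
        name: np.random.rand(3000),
    })

file_names = [f'float_{i}' for i in range(1000)]
merged = merge_with_concat(
    (merge_with_concat(dfs, 'col1') for dfs in chunk_dfs(file_names, 100)),
    'col1',
)
print(merged.shape)  # expect (3000, 1001): col1 plus one float column per frame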

A similar question was found on Stack Overflow regarding "python - How to efficiently join/merge/concatenate large dataframes in Pandas?": https://stackoverflow.com/questions/45217120/
