This article looks at how to handle a memory error in pandas; the question and answer below should be a useful reference for anyone running into the same problem.

Problem Description

I have a CSV file with ~50,000 rows and 300 columns. Performing the following operation causes a memory error in pandas (Python):

merged_df.stack(0).reset_index(1)

The dataframe looks like this:

GRID_WISE_MW1   Col0    Col1    Col2 .... Col300
7228260         1444    1819    2042
7228261         1444    1819    2042
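To see what this operation produces, here is a minimal sketch on a toy frame of the same layout (the index values mimic GRID_WISE_MW1; the small shape and column names are illustrative, not from the original data):

import numpy as np
import pandas as pd

# Toy stand-in for merged_df: 3 rows x 4 columns instead of 50,000 x 300.
merged_df = pd.DataFrame(
    np.random.randn(3, 4),
    index=[7228260, 7228261, 7228262],
    columns=["Col0", "Col1", "Col2", "Col3"],
)

# stack(0) pivots the column labels into an inner index level, producing one
# row per (index, column) pair: 3 * 4 = 12 rows here, but 50,000 * 300 =
# 15,000,000 rows for the real frame. reset_index(1) then moves the column
# labels back out into a regular column.
stacked = merged_df.stack(0).reset_index(1)
print(stacked.head())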

I am using the latest pandas (0.13.1), and the error does not occur with dataframes that have fewer rows (~2,000).

Thanks!

Recommended Answer

On my 64-bit Linux box (32 GB of RAM), this takes a little less than 2 GB of memory. (The raw values are only 50,000 × 300 × 8 bytes ≈ 120 MB, but the reshape builds a 15,000,000-row result and makes intermediate copies along the way.)

In [2]: %load_ext memory_profiler

In [3]: import numpy as np

In [4]: from pandas import DataFrame

In [5]: def f():
   ...:     # same shape as the question: 50,000 rows x 300 columns
   ...:     df = DataFrame(np.random.randn(50000, 300))
   ...:     df.stack().reset_index(1)
   ...:

In [6]: %memit f()
maximum of 1: 1791.054688 MB per loop

Since you didn't specify your platform: this won't work on 32-bit at all (you usually can't allocate a 2 GB contiguous block there), but it should work if you have reasonable swap / memory.
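If the full reshape doesn't fit on your machine, one common workaround (a sketch, not part of the original answer; stack_in_chunks is a hypothetical helper) is to stack the frame a block of rows at a time and concatenate the pieces, so that the intermediate copies stay bounded:

import numpy as np
import pandas as pd

def stack_in_chunks(df, chunk_size=5000):
    # Hypothetical helper: reshape `df` in row blocks. The final result is
    # still one row per (index, column) pair, but the temporaries created by
    # stack/reset_index are limited to chunk_size * n_columns values at a time.
    pieces = []
    for start in range(0, len(df), chunk_size):
        chunk = df.iloc[start:start + chunk_size]
        pieces.append(chunk.stack(0).reset_index(1))
    return pd.concat(pieces)

df = pd.DataFrame(np.random.randn(50000, 300))
result = stack_in_chunks(df)

Note that the concatenated result is still 15,000,000 rows; chunking only caps the peak memory used while reshaping, not the size of the output itself.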

That concludes this article on the pandas memory error. We hope the recommended answer is helpful, and thanks for your continued support!
