This article covers how to deal with a memory leak in a Python program. The discussion below should be a useful reference for anyone facing the same problem.

Problem Description


I have a Python program that runs a series of experiments, with no data intended to be stored from one test to another. My code contains a memory leak which I am completely unable to find (I've looked at the other threads on memory leaks). Due to time constraints, I have had to give up on finding the leak, but if I were able to isolate each experiment, the program would probably run long enough to produce the results I need.

  • Would running each test in a separate thread help?
  • Are there any other methods of isolating the effects of a leak?
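
On the first question: threads would not help, because all threads share the interpreter's heap, so leaked objects survive each test either way. Running each experiment in a separate process does isolate the leak, since the OS reclaims everything when the child exits. A minimal sketch using the standard multiprocessing module (`run_experiment` and its `config` argument are hypothetical stand-ins for the real experiment code):

```python
from multiprocessing import Process, Queue

def run_experiment(config, results):
    # Stand-in for the real experiment; anything it leaks is
    # reclaimed by the OS when the child process exits.
    results.put(sum(x * x for x in range(config)))

def run_all(configs):
    # Run each experiment in a fresh process so leaks cannot
    # accumulate across tests.
    results = Queue()
    outputs = []
    for config in configs:
        p = Process(target=run_experiment, args=(config, results))
        p.start()
        outputs.append(results.get())  # read before join to avoid a full-pipe deadlock
        p.join()
    return outputs

if __name__ == "__main__":
    print(run_all([10, 100]))  # [285, 328350]
```

The trade-off is per-process startup cost and having to pass results back through a queue or pipe, but for long experiment batches the isolation is usually worth it.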

Detail on the specific situation

  • My code has two parts: an experiment runner and the actual experiment code.
  • Although no globals are shared between the code for running all the experiments and the code used by each experiment, some classes/functions are necessarily shared.
  • The experiment runner isn't just a simple for loop that can be easily put into a shell script. It first decides on the tests which need to be run given the configuration parameters, then runs the tests, then outputs the data in a particular way.
  • I tried manually calling the garbage collector in case the issue was simply that garbage collection wasn't being run, but this did not work.
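
For reference, forcing a collection between experiments is a one-liner. As the last bullet notes, it only helps when the leaked objects are unreachable reference cycles, which was not the case here (a minimal sketch):

```python
import gc

def collect_between_experiments():
    # gc.collect() returns the number of unreachable objects it found.
    # It cannot free objects that are still referenced somewhere,
    # which is why it did not help with this particular leak.
    freed = gc.collect()
    return freed

print(collect_between_experiments())
```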

Update

Gnibbler's answer actually allowed me to find out that my ClosenessCalculation objects, which store all of the data used during each calculation, were not being killed off. I then used that to manually delete some references, which seems to have fixed the memory issues.
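
The fix amounts to explicitly dropping the links that kept each calculation's data reachable. A hedged sketch of the idea (the `data` attribute and `release` method are hypothetical; the real ClosenessCalculation class is not shown in the question):

```python
class ClosenessCalculation:
    def __init__(self, data):
        self.data = data  # per-calculation data that was being kept alive

    def release(self):
        # Manually break the link so the data becomes collectable
        # once no other references remain.
        self.data = None

calc = ClosenessCalculation(list(range(1000)))
# ... run the calculation and record its results ...
calc.release()
print(calc.data)  # None; the large list is now garbage
```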

Solution

You can use something like this to help track down memory leaks

>>> from collections import defaultdict
>>> from gc import get_objects
>>> before = defaultdict(int)
>>> after = defaultdict(int)
>>> for i in get_objects():
...     before[type(i)] += 1
...

now suppose the test leaks some memory

>>> leaked_things = [[x] for x in range(10)]
>>> for i in get_objects():
...     after[type(i)] += 1
...
>>> print([(k, after[k] - before[k]) for k in after if after[k] - before[k]])
[(<class 'list'>, 11)]

The count is 11 because we leaked one outer list plus the 10 lists it contains.
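
On Python 3.4+, the standard-library tracemalloc module gives a similar before/after diff with file-and-line attribution, which is often faster than counting types by hand:

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

leaked_things = [[x] for x in range(10)]  # simulate the leak again

after = tracemalloc.take_snapshot()
# Show the source lines with the biggest allocation growth
for stat in after.compare_to(before, "lineno")[:3]:
    print(stat)
```

Each printed line names the file and line that allocated the memory, with size and count deltas, which points straight at the leaking code path.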

