How to release a tensor from memory when it is no longer needed

Problem Description

I have a hypothetical graph which has a series of computations as follows:

import tensorflow as tf

a_0 = tf.placeholder(tf.float32)  # tf.placeholder requires a dtype
a_1 = some_op_1(a_0)              # some_op_* are hypothetical placeholder ops
a_2 = some_op_2(a_1)
a_3 = some_op_3(a_2)

Observe that when computing a_3, a_0 and a_1 are no longer needed, so they could be discarded before memory is allocated for a_3. Is there some way to ask TensorFlow to perform this memory optimization (accepting that it may cost some time)?

Please note that this is not the same as this question about allocating memory only when needed.

This network will not be trained, so don't worry about backprop.

Recommended Answer

TensorFlow uses reference counting to release the memory backing a tensor as soon as it is no longer needed. The values of a_0 and a_1 will be deleted as soon as there are no more references to them, and in recent builds of TensorFlow (post-1.0 nightly builds) some operations will even reuse the input buffer for the output if the input and output have the same shape and element type.
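
For concreteness, here is a minimal sketch of the graph above with same-shape, same-dtype elementwise ops (tf.nn.relu, a multiply, an add) standing in for the question's hypothetical some_op_1/2/3. Since only a_3 is fetched, the runtime can drop each intermediate buffer once its last consumer has run, and because every op here preserves shape and element type, the buffer reuse described above can apply:

import numpy as np
import tensorflow as tf

# Elementwise ops stand in for the hypothetical some_op_1/2/3; each
# preserves the input's shape and dtype, so buffer reuse is possible.
a_0 = tf.placeholder(tf.float32, shape=[1000, 1000])
a_1 = tf.nn.relu(a_0)   # stand-in for some_op_1
a_2 = a_1 * 2.0         # stand-in for some_op_2
a_3 = a_2 + 1.0         # stand-in for some_op_3

with tf.Session() as sess:
    # Only a_3 is fetched; the buffers for a_0, a_1, and a_2 are released
    # by reference counting as soon as their last consumers have run.
    result = sess.run(a_3, feed_dict={a_0: np.ones([1000, 1000], np.float32)})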
