Problem Description
I am using Ubuntu 16.04 and TensorFlow 1.3.
A network with ~17M weights.
-
Image size 400x1000, batch size 4, during graph construction:
Image size 300x750, batch size 4, during graph construction:
Image size 300x740, batch size 1, during graph construction:
So the memory requested is the same for all three experiments. My question is: do 17M weights really need such a huge amount of memory? And why doesn't the required memory change with different image sizes and batch sizes?
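For scale, here is a quick back-of-envelope sketch (assuming float32 weights and, purely as an illustration, an optimizer with two slot variables per weight, which the question does not state) showing that the parameters themselves account for only tens of megabytes:

# Rough estimate of parameter memory for ~17M float32 weights.
num_weights = 17 * 10**6          # ~17M parameters (from the question)
bytes_per_weight = 4              # float32
weight_mem_mb = num_weights * bytes_per_weight / (1024.0 ** 2)
print("Weights only: %.1f MB" % weight_mem_mb)                      # ~64.8 MB

# With an optimizer such as Adam, each weight typically carries two extra
# slot variables (m and v), roughly tripling parameter memory -- still only
# a few hundred MB, nowhere near tens of GB.
print("Weights + 2 optimizer slots: %.1f MB" % (3 * weight_mem_mb))

This suggests that the bulk of the requested memory comes from something other than the weights themselves.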
Recommended Answer
It could be because you are storing a lot of intermediate results. After each sess.run, some new memory is allocated to hold the newly fetched tensor results, and once that new allocation is added, the total memory allocated on your host exceeds 32GB. Please check the host memory (not GPU memory) used during runtime. If that is the case, you need to reduce your host memory allocation; storing the intermediate results to disk may be a good choice.
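A minimal sketch of how one might monitor host RSS memory around sess.run calls with psutil; the toy placeholder graph and shapes below are assumptions for illustration, not the asker's actual network:

import os
import psutil
import numpy as np
import tensorflow as tf

process = psutil.Process(os.getpid())

def host_mem_gb():
    # Resident set size of this Python process, i.e. host RAM (not GPU memory).
    return process.memory_info().rss / (1024.0 ** 3)

# Stand-in graph: a single conv layer over 300x750 RGB images.
x = tf.placeholder(tf.float32, [None, 300, 750, 3])
y = tf.layers.conv2d(x, 32, 3)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(5):
        batch = np.zeros([4, 300, 750, 3], dtype=np.float32)
        sess.run(y, feed_dict={x: batch})
        print("step %d: host memory %.2f GB" % (step, host_mem_gb()))

If the printed figure keeps growing step after step, you are most likely accumulating fetched results in Python (for example, appending them to a list); writing them to disk as they are produced keeps host memory flat.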