ResourceExhaustedError when using EfficientNet in Keras

This article explains how to handle a ResourceExhaustedError when using EfficientNet in Keras. Hopefully it is a useful reference for anyone who runs into the same problem.

Problem Description

I am using Google Colab. While using EfficientNetB3, I get the following error:

Resource exhausted: OOM when allocating tensor with shape[15,95,95,192] and type float

I understand this happens because my data does not fit in GPU memory. But when I try InceptionResNetV2, I do not get any error.

Number of trainable parameters in EfficientNetB3: 22,220,824
Number of trainable parameters in InceptionResNetV2: 109,380,744

InceptionResNetV2 has about 5 times as many trainable parameters as EfficientNetB3, so I would expect InceptionResNetV2 to throw the error, not EfficientNetB3.

Any idea why I am getting a resource error with EfficientNetB3?

Note: I am using two parallel networks, and these parameter counts are the sum of both networks' parameters.
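For reference, a trainable-parameter count like the ones above can be reproduced in Keras roughly as follows. This is a minimal sketch: it builds each backbone with its default input shape and classifier head, which is an assumption (the original post does not show the model code), and a single backbone will not match the question's totals, since those sum two parallel networks.

import tensorflow as tf

# Build both backbones with default input shapes and heads
# (an assumption; the question's actual model code is not shown).
effnet = tf.keras.applications.EfficientNetB3(weights=None)
incres = tf.keras.applications.InceptionResNetV2(weights=None)

def trainable_params(model):
    # Sum the element counts of every trainable weight tensor.
    return sum(tf.keras.backend.count_params(w) for w in model.trainable_weights)

print("EfficientNetB3:", trainable_params(effnet))
print("InceptionResNetV2:", trainable_params(incres))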

Recommended Answer

All the papers seem to be using TPUs to run the EfficientNets, and I have a feeling something else is making the model use far more memory. I agree it isn't intuitive, since there are fewer trainable parameters in EfficientNets. Note, though, that the tensor in the OOM message is an intermediate activation: training memory is dominated by stored activations, which scale with batch size and feature-map size rather than with parameter count, and EfficientNet's inverted-bottleneck blocks expand the channel count several-fold internally, so its activations can be far larger than its parameter count suggests. In any case, it does seem you need to actually be using TPUs to run it, so this would basically require a cloud service that gives you access to TPUs, etc.
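In Colab, the standard TensorFlow 2 pattern for attaching a Keras model to the TPU runtime looks roughly like this (a sketch, assuming a TPU runtime is selected under Runtime > Change runtime type; the model arguments are placeholders):

import tensorflow as tf

# Connect to the Colab-provided TPU and initialize it.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():
    # Models built inside the scope are replicated across the TPU cores.
    model = tf.keras.applications.EfficientNetB3(weights=None, classes=10)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

If a TPU is not an option, the usual GPU-side workaround for this error is simply a smaller batch size (the failing tensor's first dimension, 15, is the batch), or a smaller EfficientNet variant such as B0.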

This concludes this article on ResourceExhaustedError when using EfficientNet in Keras. We hope the recommended answer is helpful; thanks for your support!
