Problem description
I have a fine-tuned network that I created which uses vgg16 as its base. I am following section 5.4.2 Visualizing ConvNet Filters in Deep Learning with Python (which is very similar to the guide on the Keras blog to visualize convnet filters here).
The guide simply uses the vgg16 network. My fine-tuned model uses the vgg16 model as its base, for example:
model.summary()
Layer (type)                 Output Shape              Param #
=================================================================
vgg16 (Model)                (None, 4, 4, 512)         14714688
_________________________________________________________________
flatten_1 (Flatten)          (None, 8192)              0
_________________________________________________________________
dense_7 (Dense)              (None, 256)               2097408
_________________________________________________________________
dense_8 (Dense)              (None, 3)                 771
=================================================================
Total params: 16,812,867
Trainable params: 16,812,867
Non-trainable params: 0
I'm running into an issue when I run this line: grads = K.gradients(loss, model.input)[0]
When I use my fine-tuned network, the result I get is a "NoneType".
Here is the code from the guide:
> from keras.applications import VGG16
> from keras import backend as K
>
> model = VGG16(weights='imagenet',
> include_top=False)
>
> layer_name = 'block3_conv1'
> filter_index = 0
>
> layer_output = model.get_layer(layer_name).output
> loss = K.mean(layer_output[:, :, :, filter_index])
>
> grads = K.gradients(loss, model.input)[0]
To reproduce this on my fine-tuned model, I've used the exact same code, except I obviously changed the model that I imported:
model = keras.models.load_model(trained_models_dir + 'fine_tuned_model.h5')
...and I also had to index into the nested Model object (my first layer is a Model object, as shown above) to get the 'block3_conv1' layer:
my_Model_object = 'vgg16'
layer_name = 'block3_conv1'
filter_index = 0

layer_output = model.get_layer(my_Model_object).get_layer(layer_name).output
Any idea why running grads = K.gradients(loss, model.input)[0] on my fine-tuned network would result in a "NoneType"?
Thanks.
Accepted answer
Solved: I had to use:
grads = K.gradients(loss, model.get_layer(my_Model_object).get_layer('input_1').input)[0]
instead of just:
grads = K.gradients(loss, model.input)[0]
which is confusing, because both

model.get_layer(my_Model_object).get_layer('input_1').input

and

model.input

print the same thing and are of the same type.
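The underlying cause is that the nested layer's output tensor is attached to the inner Model's own input node, not to the outer model's input, so the gradient graph between them is disconnected and K.gradients returns None. Below is a minimal sketch reproducing this, assuming TF 2.x with tf.keras in graph mode; the tiny inner/outer models and all layer names here are illustrative stand-ins for vgg16 and the fine-tuned wrapper, not the poster's actual network.

```python
import tensorflow as tf
tf.compat.v1.disable_eager_execution()  # K.gradients needs graph mode on TF 2.x
from tensorflow.keras import backend as K
from tensorflow.keras import layers, models

# Tiny stand-in for the fine-tuned network: an inner Model nested in an outer one.
inner_in = layers.Input(shape=(8, 8, 3))
inner_out = layers.Conv2D(4, 3, padding='same', name='inner_conv')(inner_in)
inner = models.Model(inner_in, inner_out, name='base')

outer_in = layers.Input(shape=(8, 8, 3))
x = inner(outer_in)
x = layers.Flatten()(x)
outer = models.Model(outer_in, layers.Dense(2)(x))

# The nested layer's output tensor hangs off the INNER model's input node,
# so it is not connected to outer.input in the graph.
layer_output = outer.get_layer('base').get_layer('inner_conv').output
loss = K.mean(layer_output[:, :, :, 0])

grads_wrt_outer = K.gradients(loss, outer.input)[0]   # disconnected -> None
grads_wrt_inner = K.gradients(loss, inner.input)[0]   # connected -> a tensor

print(grads_wrt_outer is None)
print(grads_wrt_inner is None)
```

This is why differentiating with respect to the nested model's own input (the accepted fix above) works, even though the outer and inner input tensors print identically.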