Problem description
I am trying to reproduce the output of a Tensorflow Hub
module that is based on a Tensorflow Slim
checkpoint, using the Tensorflow Slim
modules directly. However, I can't seem to get the expected output. For example, let us load the required libraries, create a sample input, and create a placeholder to feed the data:
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.contrib import slim
from tensorflow.contrib.slim import nets

images = np.random.rand(1, 224, 224, 3).astype(np.float32)
inputs = tf.placeholder(shape=[None, 224, 224, 3], dtype=tf.float32)
Load the TF Hub
module:
resnet_hub = hub.Module("https://tfhub.dev/google/imagenet/resnet_v2_152/feature_vector/3")
features_hub = resnet_hub(inputs, signature="image_feature_vector", as_dict=True)["resnet_v2_152/block4"]
Now, let's do the same with TF Slim
and create a loader that will load the checkpoint:
with slim.arg_scope(nets.resnet_utils.resnet_arg_scope()):
    _, end_points = nets.resnet_v2.resnet_v2_152(inputs, is_training=False)
features_slim = end_points["resnet_v2_152/block4"]
loader = tf.train.Saver(tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="resnet_v2_152"))
Now, once we have everything in place we can test whether the outputs are the same:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    loader.restore(sess, "resnet_v2_152_2017_04_14/resnet_v2_152.ckpt")
    slim_output = sess.run(features_slim, feed_dict={inputs: images})
    hub_output = sess.run(features_hub, feed_dict={inputs: images})
    np.testing.assert_array_equal(slim_output, hub_output)
However, the assertion fails because the two outputs are not the same. I assume this is because the TF Hub
module applies internal preprocessing to the inputs that the TF Slim
implementation lacks.
Let me know your thoughts!
Recommended answer
Those Hub modules scale their inputs from the canonical range [0,1] to whatever the respective Slim checkpoint expects from the preprocessing it was trained with (typically [-1,+1] for "Inception-style" preprocessing). Passing both the same raw inputs can therefore explain a large difference. Even after linear rescaling to fix that, a difference up to compounded numerical error wouldn't surprise me (given the many degrees of freedom inside TF), but a major discrepancy might indicate a bug.
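The rescaling the answer describes can be sketched in plain NumPy; the helper name `inception_rescale` and the tolerance value below are illustrative assumptions, not part of either the Hub or Slim API:

```python
import numpy as np

def inception_rescale(images_01):
    # Illustrative helper (assumed name): map images from the canonical
    # [0, 1] range to the [-1, +1] range used by Inception-style
    # preprocessing, i.e. x -> 2*x - 1.
    return images_01 * 2.0 - 1.0

# The sample input from the question lies in [0, 1), so rescaling it
# mimics what the Hub module does internally before running the network.
images = np.random.rand(1, 224, 224, 3).astype(np.float32)
rescaled = inception_rescale(images)

# The rescaled values stay within [-1, +1].
assert rescaled.min() >= -1.0 and rescaled.max() <= 1.0
```

Feeding `rescaled` (rather than `images`) into the Slim graph should bring the two outputs much closer; after that, compare with a tolerance rather than exact equality, e.g. `np.testing.assert_allclose(slim_output, hub_output, rtol=1e-4)` (the tolerance here is a guess, not a documented value), since bit-exact agreement is too strict given floating-point differences.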