This article looks at the question "Tensorflow: how to insert a custom input into an existing graph?" and walks through a solution; it may be useful to readers facing the same problem.

Problem description

I have downloaded a TensorFlow GraphDef that implements a VGG16 ConvNet, and I load it like this:

Pl['images'] = tf.placeholder(tf.float32,
                          [None, 448, 448, 3],
                          name="images") #batch x width x height x channels
with open("tensorflow-vgg16/vgg16.tfmodel", mode='rb') as f:
    fileContent = f.read()

graph_def = tf.GraphDef()
graph_def.ParseFromString(fileContent)
tf.import_graph_def(graph_def, input_map={"images": Pl['images']})

Besides, I have pre-computed image features that are homogeneous to (i.e. the same shape as) the output of "import/pool5/".
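
For concreteness, a minimal sketch of how such features could be held; the name pool5_features and the 7x7x512 shape are assumptions based on VGG16's pool5 output, not part of the original question:

# Hypothetical placeholder for pre-computed pool5-like features
# (shape [None, 7, 7, 512] is assumed from VGG16's pool5 output).
pool5_features = tf.placeholder(tf.float32,
                                [None, 7, 7, 512],
                                name="pool5_features")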

How can I tell my graph that I don't want to use its input "images", but the tensor "import/pool5/" as input instead?

Thanks!

EDIT

OK, I realize I haven't been very clear. Here is the situation:

I am trying to use this implementation of ROI pooling, using a pre-trained VGG16, which I have in the GraphDef format. So here is what I do:

First of all, I load the model:

tf.reset_default_graph()
with open("tensorflow-vgg16/vgg16.tfmodel",
          mode='rb') as f:
    fileContent = f.read()
graph_def = tf.GraphDef()
graph_def.ParseFromString(fileContent)
graph = tf.get_default_graph()

Then, I create my placeholders

images = tf.placeholder(tf.float32,
                              [None, 448, 448, 3],
                              name="images") #batch x width x height x channels
boxes = tf.placeholder(tf.float32,
                             [None,5], # 5 = [batch_id,x1,y1,x2,y2]
                             name = "boxes")

And I define the output of the first part of the graph to be conv5_3/Relu

tf.import_graph_def(graph_def,
                    input_map={'images':images})
out_tensor = graph.get_tensor_by_name("import/conv5_3/Relu:0")

So, out_tensor is of shape [None,14,14,512]
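
(As a quick sanity check, a hypothetical one-liner to confirm the static shape of the recovered tensor:)

# Hypothetical check: print the static shape of the imported tensor.
print(out_tensor.get_shape())  # expected: (?, 14, 14, 512)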

Then, I do the ROI pooling:

[out_pool,argmax] = module.roi_pool(out_tensor,
                                    boxes,
                                    7,7,1.0/1)

With out_pool.shape = N_Boxes_in_batch x 7 x 7 x 512, which is homogeneous to pool5. I would then like to feed out_pool as an input to the op that comes just after pool5, so it would look like

tf.import_graph_def(graph.as_graph_def(),
                    input_map={'import/pool5':out_pool})

But it doesn't work; I get this error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-89-527398d7344b> in <module>()
      5
      6 tf.import_graph_def(graph.as_graph_def(),
----> 7                     input_map={'import/pool5':out_pool})
      8
      9 final_out = graph.get_tensor_by_name("import/Relu_1:0")

/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/importer.py in import_graph_def(graph_def, input_map, return_elements, name, op_dict)
    333       # NOTE(mrry): If the graph contains a cycle, the full shape information
    334       # may not be available for this op's inputs.
--> 335       ops.set_shapes_for_outputs(op)
    336
    337       # Apply device functions for this op.

/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/ops.py in set_shapes_for_outputs(op)
   1610       raise RuntimeError("No shape function registered for standard op: %s"
   1611                          % op.type)
-> 1612   shapes = shape_func(op)
   1613   if len(op.outputs) != len(shapes):
   1614     raise RuntimeError(

/home/hbenyounes/vqa/roi_pooling_op_grad.py in _roi_pool_shape(op)
     13   channels = dims_data[3]
     14   print(op.inputs[1].name, op.inputs[1].get_shape())
---> 15   dims_rois = op.inputs[1].get_shape().as_list()
     16   num_rois = dims_rois[0]
     17

/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/tensor_shape.py in as_list(self)
    745       A list of integers or None for each dimension.
    746     """
--> 747     return [dim.value for dim in self._dims]
    748
    749   def as_proto(self):

TypeError: 'NoneType' object is not iterable

Any clue?

Solution

What I would do is something along those lines:

-First retrieve the names of the tensors holding the weights and biases of the 3 fully connected layers coming after pool5 in VGG16.
To do that I would inspect [n.name for n in graph.as_graph_def().node]. (They probably look something like import/locali/weight:0, import/locali/bias:0, etc.)
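
A minimal sketch of that inspection step; the "local", "weight" and "bias" substrings used for filtering are only guesses at how the fully connected layers are named in this particular vgg16.tfmodel:

# Sketch (assumed names): list the nodes of the imported graph and keep
# the ones that look like the fully connected layers' variables.
all_names = [n.name for n in graph.as_graph_def().node]
fc_names = [name for name in all_names
            if "local" in name and ("weight" in name or "bias" in name)]
print(fc_names)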

-Put them in a python list:

weights_names = ["import/local1/weight:0", "import/local2/weight:0", "import/local3/weight:0"]
biases_names = ["import/local1/bias:0", "import/local2/bias:0", "import/local3/bias:0"]

-Define a function that looks something like:

def pool5_tofcX(input_tensor, layer_number=3):
  # Flatten the pooled features into a [batch, 7*7*512] matrix.
  flatten = tf.reshape(input_tensor, (-1, 7*7*512))
  tmp = flatten
  for i in range(layer_number):
    # Reuse the pre-trained weights and biases of the i-th FC layer.
    tmp = tf.matmul(tmp, graph.get_tensor_by_name(weights_names[i]))
    tmp = tf.nn.bias_add(tmp, graph.get_tensor_by_name(biases_names[i]))
    tmp = tf.nn.relu(tmp)
  return tmp

Then define the tensor using the function:

wanted_output=pool5_tofcX(out_pool)

Then you are done!
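
For completeness, a hypothetical way to evaluate the rebuilt head; img_batch and roi_batch stand in for your own image and box arrays and are not part of the original answer:

# Hypothetical usage: img_batch has shape [N, 448, 448, 3] and
# roi_batch has shape [M, 5] = [batch_id, x1, y1, x2, y2].
with tf.Session() as sess:
  fc_features = sess.run(wanted_output,
                         feed_dict={images: img_batch,
                                    boxes: roi_batch})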

That concludes this article on "Tensorflow: how to insert a custom input into an existing graph?"; we hope the answer above is helpful.
