Problem description
I want to extract a pbtxt file given a TensorFlow frozen inference graph as input. To do this I am using the script below:
import tensorflow as tf
#from google.protobuf import text_format
from tensorflow.python.platform import gfile

def converter(filename):
    with gfile.FastGFile(filename, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')
        tf.train.write_graph(graph_def, 'pbtxt/', 'protobuf.pbtxt', as_text=True)
    print(graph_def)
    return

#converter('ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb') # here you can write the name of the file to be converted
# and then a new file will be made in pbtxt directory.
converter('ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb')
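As a quick sanity check, a minimal sketch under the same TF1-style API (the path is simply the one passed to converter above; the node-name printout is my own addition) can confirm the frozen graph actually parses:

import tensorflow as tf

# Sketch: parse the frozen graph and print the first few node names.
graph_def = tf.GraphDef()
with open('ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
print([node.name for node in graph_def.node[:5]])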
As an example, I am using the SSD MobileNet architecture. With the above code I do get a pbtxt output, but I cannot use it. For reference, see the image below.
When I use the official pbtxt on the RIGHT I get correct results, but when I use the LEFT pbtxt, which I generated with the above script, I do not get any predictions.
I am using these predictions with the OpenCV DNN module:
tensorflowNet = cv2.dnn.readNetFromTensorflow('ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb', 'pbtxt/protobuf.pbtxt')
How do I convert a MobileNet frozen inference graph into the proper pbtxt format so that I can run inference?
References: https://gist.github.com/Arafatk/c063bddb9b8d17a037695d748db4f592
Here's what worked for me:
- git clone https://github.com/opencv/opencv.git
- Navigate to opencv/samples/dnn/
- Copy frozen_inference_graph.pb and the *.config file corresponding to your .pb file
- Paste the copied files into the opencv/samples/dnn directory
- Make a new folder in the dnn directory and name it "exported_pbtxt"
And run this script:
python3 tf_text_graph_ssd.py --input frozen_inference_graph.pb --output exported_pbtxt/output.pbtxt --config pipeline.config
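If you would rather drive that step from Python than from the shell, a minimal sketch (assuming the same file layout as in the steps above) is to invoke the script through subprocess:

import subprocess

# Sketch: run OpenCV's tf_text_graph_ssd.py with the same arguments as the command above.
subprocess.run([
    'python3', 'tf_text_graph_ssd.py',
    '--input', 'frozen_inference_graph.pb',
    '--output', 'exported_pbtxt/output.pbtxt',
    '--config', 'pipeline.config',
], check=True)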
That's all you need. Now copy the frozen inference graph and the newly generated pbtxt file, and use the following script to run your model with OpenCV:
import cv2

# Load a model imported from Tensorflow
tensorflowNet = cv2.dnn.readNetFromTensorflow('card_graph/frozen_inference_graph.pb', 'exported_pbtxt/output.pbtxt')

# Input image
img = cv2.imread('image.jpg')
rows, cols, channels = img.shape

# Use the given image as input, which needs to be a blob
tensorflowNet.setInput(cv2.dnn.blobFromImage(img, size=(300, 300), swapRB=True, crop=False))

# Run a forward pass to compute the net output
networkOutput = tensorflowNet.forward()

# Loop over the detections in the output
for detection in networkOutput[0, 0]:
    score = float(detection[2])
    if score > 0.9:
        left = detection[3] * cols
        top = detection[4] * rows
        right = detection[5] * cols
        bottom = detection[6] * rows

        # Draw a red rectangle around the detected object
        cv2.rectangle(img, (int(left), int(top)), (int(right), int(bottom)), (0, 0, 255), thickness=2)

# Show the image with rectangles around the detected objects
cv2.imshow('Image', img)
cv2.waitKey()
cv2.destroyAllWindows()
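To also report which class each high-scoring box belongs to, here is a minimal extension sketch; it assumes the standard DetectionOutput row layout [batch_id, class_id, score, left, top, right, bottom], which is consistent with the score and box indices already used above:

# Sketch: print the class id and confidence for each strong detection.
for detection in networkOutput[0, 0]:
    class_id = int(detection[1])   # assumed layout: index 1 holds the class id
    score = float(detection[2])
    if score > 0.9:
        print('class id %d detected with confidence %.2f' % (class_id, score))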