I followed the TensorFlow For Poets codelab to do transfer learning with inception_v3. It produces the files retrained_graph.pb and retrained_labels.txt, which can be used to make predictions locally (by running label_image.py).
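
(For context, local prediction against the retrained graph looks roughly like the following. This is a simplified sketch of what label_image.py does, assuming the default tensor names DecodeJpeg/contents:0 and final_result:0 and a hypothetical test image path:)

import tensorflow as tf

# Simplified sketch of label_image.py: load the frozen retrained graph and classify one image.
with tf.gfile.GFile('../tf_files/retrained_graph2.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as g:
    tf.import_graph_def(graph_def, name='')

with open('test.jpg', 'rb') as f:  # hypothetical test image
    jpeg_bytes = f.read()

with tf.Session(graph=g) as sess:
    # Feed raw JPEG bytes to the decode placeholder and read back the softmax scores.
    scores = sess.run('final_result:0', feed_dict={'DecodeJpeg/contents:0': jpeg_bytes})
    print(scores)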

Then I wanted to deploy this model to Cloud ML Engine so I could get online predictions. For that, I had to export retrained_graph.pb to the SavedModel format. I managed to do it by following the instructions in this answer from Google's @rhaertel80, this python file, and the Flowers Cloud ML Engine tutorial. Here is my code:

import tensorflow as tf
from tensorflow.contrib import layers

from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model import utils as saved_model_utils


export_dir = '../tf_files/saved7'
retrained_graph = '../tf_files/retrained_graph2.pb'
label_count = 5

def build_signature(inputs, outputs):
    signature_inputs = { key: saved_model_utils.build_tensor_info(tensor) for key, tensor in inputs.items() }
    signature_outputs = { key: saved_model_utils.build_tensor_info(tensor) for key, tensor in outputs.items() }

    signature_def = signature_def_utils.build_signature_def(
        signature_inputs,
        signature_outputs,
        signature_constants.PREDICT_METHOD_NAME
    )

    return signature_def

class GraphReferences(object):
  def __init__(self):
    self.examples = None
    self.train = None
    self.global_step = None
    self.metric_updates = []
    self.metric_values = []
    self.keys = None
    self.predictions = []
    self.input_jpeg = None

class Model(object):
    def __init__(self, label_count):
        self.label_count = label_count

    def build_image_str_tensor(self):
        image_str_tensor = tf.placeholder(tf.string, shape=[None])

        def decode_and_resize(image_str_tensor):
            return image_str_tensor

        image = tf.map_fn(
            decode_and_resize,
            image_str_tensor,
            back_prop=False,
            dtype=tf.string
        )

        return image_str_tensor

    def build_prediction_graph(self, g):
        tensors = GraphReferences()
        tensors.examples = tf.placeholder(tf.string, name='input', shape=(None,))
        tensors.input_jpeg = self.build_image_str_tensor()

        keys_placeholder = tf.placeholder(tf.string, shape=[None])
        inputs = {
            'key': keys_placeholder,
            'image_bytes': tensors.input_jpeg
        }

        keys = tf.identity(keys_placeholder)
        outputs = {
            'key': keys,
            'prediction': g.get_tensor_by_name('final_result:0')
        }

        return inputs, outputs

    def export(self, output_dir):
        with tf.Session(graph=tf.Graph()) as sess:
            with tf.gfile.GFile(retrained_graph, "rb") as f:
                graph_def = tf.GraphDef()
                graph_def.ParseFromString(f.read())
                tf.import_graph_def(graph_def, name="")

            g = tf.get_default_graph()
            inputs, outputs = self.build_prediction_graph(g)

            signature_def = build_signature(inputs=inputs, outputs=outputs)
            signature_def_map = {
                signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature_def
            }

            builder = saved_model_builder.SavedModelBuilder(output_dir)
            builder.add_meta_graph_and_variables(
                sess,
                tags=[tag_constants.SERVING],
                signature_def_map=signature_def_map
            )
            builder.save()

model = Model(label_count)
model.export(export_dir)


This code produces a saved_model.pb file, which I then used to create a Cloud ML Engine model. I can get predictions from this model with gcloud ml-engine predict --model my_model_name --json-instances request.json, where the contents of request.json are:

{ "key": "0", "image_bytes": { "b64": "jpeg_image_base64_encoded" } }


However, no matter which jpeg I encode in the request, I always get exactly the same wrong prediction:

[Prediction output screenshot]

My guess is that the problem lies in the way the CloudML Prediction API passes the base64-encoded image bytes to the input tensor "DecodeJpeg/contents:0" of inception_v3 (the "build_image_str_tensor()" method in the code above). Any clue on how I can solve this so that my locally retrained model serves correct predictions on Cloud ML Engine?

(Just to be clear, the problem is not in retrained_graph.pb, since it makes correct predictions when I run it locally; nor in request.json, since the same request file worked fine when following the Flowers Cloud ML Engine tutorial pointed out above.)

Best Answer

First, a general warning. The TensorFlow for Poets codelab was not written in a way that is amenable to production serving (partly manifested by the workarounds you are having to implement). You would normally export a prediction-specific graph that doesn't contain all of the extra training ops. So while we can try to hack together something that works, extra work may still be needed to productionize this graph.

The approach of your code appears to be to import one graph, add some placeholders, and then export the result. This is generally fine. However, in the code shown in the question, you are adding input placeholders without actually connecting them to anything in the imported graph. You end up with a graph containing multiple disconnected subgraphs, something like this (excuse the crude diagram):

image_str_tensor [input=image_bytes] -> <nothing>
keys_placeholder [input=key]  -> identity [output=key]
inception_subgraph -> final_graph [output=prediction]


Here inception_subgraph refers to all of the ops that you are importing.

So image_bytes is effectively a no-op and is ignored; key gets passed through; and prediction contains the result of running the inception_subgraph. Since it isn't using the input you are passing, it returns the same result every time (though I admit I actually expected an error here).

To address this problem, we need to connect the placeholder you've created to the one that already exists in inception_subgraph, so that the graph looks more or less like this:

image_str_tensor [input=image_bytes] -> inception_subgraph -> final_graph [output=prediction]
keys_placeholder [input=key]  -> identity [output=key]


Note that image_str_tensor will be a batch of images, as required by the prediction service, but the inception graph's input is actually a single image. For the sake of simplicity, we're going to address this in a hacky way: we'll assume images are sent one at a time. If we ever send more than one image per request, we'll get an error. Also, batch prediction will never work.

The main change you need is the import statement, which connects the placeholder we've added to the existing input in the graph (you'll also see the code for changing the shape of the input below).
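
If you're not sure what the existing input placeholder is actually called in your graph (the question mentions DecodeJpeg/contents:0, while the code below assumes DecodeJPGInput:0), a rough sketch for listing the placeholders in the frozen graph:

import tensorflow as tf

# List the placeholder ops in retrained_graph.pb so we know what name to use in input_map.
with tf.gfile.GFile('../tf_files/retrained_graph2.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op == 'Placeholder':
        print(node.name)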

Putting it all together, we get:

import tensorflow as tf
from tensorflow.contrib import layers

from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import signature_constants
from tensorflow.python.saved_model import signature_def_utils
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.saved_model import utils as saved_model_utils


export_dir = '../tf_files/saved7'
retrained_graph = '../tf_files/retrained_graph2.pb'
label_count = 5

class Model(object):
    def __init__(self, label_count):
        self.label_count = label_count

    def build_prediction_graph(self, g):
        # NOTE: carried over from the question's code for reference; export() below does not
        # call this method (keys_placeholder and tensors are not defined in this scope).
        inputs = {
            'key': keys_placeholder,
            'image_bytes': tensors.input_jpeg
        }

        keys = tf.identity(keys_placeholder)
        outputs = {
            'key': keys,
            'prediction': g.get_tensor_by_name('final_result:0')
        }

        return inputs, outputs

    def export(self, output_dir):
        with tf.Session(graph=tf.Graph()) as sess:
            # This will be our input that accepts a batch of inputs
            image_bytes = tf.placeholder(tf.string, name='input', shape=(None,))
            # Force it to be a single input; will raise an error if we send a batch.
            coerced = tf.squeeze(image_bytes)
            # When we import the graph, we'll connect `coerced` to `DecodeJPGInput:0`
            input_map = {'DecodeJPGInput:0': coerced}

            with tf.gfile.GFile(retrained_graph, "rb") as f:
                graph_def = tf.GraphDef()
                graph_def.ParseFromString(f.read())
                tf.import_graph_def(graph_def, input_map=input_map, name="")

            keys_placeholder = tf.placeholder(tf.string, shape=[None])

            inputs = {'image_bytes': image_bytes, 'key': keys_placeholder}

            keys = tf.identity(keys_placeholder)
            outputs = {
                'key': keys,
                'prediction': tf.get_default_graph().get_tensor_by_name('final_result:0')
            }

            tf.saved_model.simple_save(sess, output_dir, inputs, outputs)

model = Model(label_count)
model.export(export_dir)
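
(As a sanity check before creating the Cloud ML Engine version, you can load the exported SavedModel locally and run a single image through it. A rough sketch, reusing the export_dir from above and a hypothetical test image path:)

import tensorflow as tf

export_dir = '../tf_files/saved7'

with tf.Session(graph=tf.Graph()) as sess:
    # Load the SavedModel written by export() above.
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], export_dir)

    with open('test.jpg', 'rb') as f:  # hypothetical test image
        jpeg_bytes = f.read()

    # 'input:0' is the image_bytes placeholder defined in export(); it expects a batch of one.
    scores = sess.run('final_result:0', feed_dict={'input:0': [jpeg_bytes]})
    print(scores)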
