This post explains how to get a TensorFlow/Keras model that takes images as input to serve predictions on Cloud ML Engine, and is intended as a reference for anyone tackling the same problem.

Problem Description

There are multiple questions (examples: 1, 2, 3, 4, 5, 6, etc.) trying to address the question of how to handle image data when serving predictions for TensorFlow/Keras models in Cloud ML Engine.

Unfortunately, some of the answers are out-of-date and none of them comprehensively addresses the problem. The purpose of this post is to provide a comprehensive, up-to-date answer for future reference.

Recommended Answer

This answer is going to focus on Estimators, the high-level API for writing TensorFlow code and currently the recommended way to do so. In addition, Keras uses Estimators to export models for serving.

This answer is going to be divided into two parts:

  1. How to write the input_fn.
  2. Client code for sending requests once the model has been deployed.

How to Write the input_fn

The exact details of your input_fn will depend on your unique requirements. For instance, you may do image decoding and resizing client-side, you might use JPG vs. PNG, you may expect a specific image size, you may have additional inputs besides images, etc. We will focus on a fairly general approach that accepts various image formats at a variety of sizes. Thus, the following generic code should be fairly easy to adapt to any of the more specific scenarios.

import tensorflow as tf

HEIGHT = 199
WIDTH = 199
CHANNELS = 1

def serving_input_receiver_fn():

  def decode_and_resize(image_str_tensor):
    """Decodes a JPEG string, resizes it and returns a uint8 tensor."""
    image = tf.image.decode_jpeg(image_str_tensor, channels=CHANNELS)
    image = tf.expand_dims(image, 0)
    image = tf.image.resize_bilinear(
        image, [HEIGHT, WIDTH], align_corners=False)
    image = tf.squeeze(image, axis=[0])
    image = tf.cast(image, dtype=tf.uint8)
    return image

  # Optional; currently necessary for batch prediction.
  key_input = tf.placeholder(tf.string, shape=[None])
  key_output = tf.identity(key_input)

  input_ph = tf.placeholder(tf.string, shape=[None], name='image_binary')
  images_tensor = tf.map_fn(
      decode_and_resize, input_ph, back_prop=False, dtype=tf.uint8)
  images_tensor = tf.image.convert_image_dtype(images_tensor, dtype=tf.float32)

  return tf.estimator.export.ServingInputReceiver(
      {'images': images_tensor},
      {'bytes': input_ph})

If you've saved out your Keras model and would like to convert it to a SavedModel, use the following:

KERAS_MODEL_PATH = '/path/to/model'
MODEL_DIR = '/path/to/store/checkpoints'
EXPORT_PATH = '/path/to/store/savedmodel'

# If you are invoking this from your training code, use `keras_model=model` instead.
estimator = tf.keras.estimator.model_to_estimator(
    keras_model_path=KERAS_MODEL_PATH,
    model_dir=MODEL_DIR)
estimator.export_savedmodel(
    EXPORT_PATH,
    serving_input_receiver_fn=serving_input_receiver_fn)

Sending Requests (Client Code)

The body of the requests sent to the service will look like the following:

{
  "instances": [
    {"bytes": {"b64": "<base64 encoded image>"}},  # image 1
    {"bytes": {"b64": "<base64 encoded image>"}}   # image 2 ...        
  ]
}
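Since the service base64-decodes each "b64" field back to the raw image bytes before feeding them to the graph, the body above can be built with nothing but the standard library. A minimal sketch (the image bytes are faked here purely for illustration):

```python
import base64
import json

# Hypothetical image bytes; in practice this is the content of a JPEG file.
img_data = b'\xff\xd8\xff\xe0 fake jpeg bytes'

# Build the request body in the format the service expects.
body = {
    'instances': [
        {'bytes': {'b64': base64.b64encode(img_data).decode('utf-8')}},
    ]
}
payload = json.dumps(body)

# The service reverses the encoding: base64-decoding the field yields the
# original bytes that are fed to the serving graph's input placeholder.
decoded = base64.b64decode(json.loads(payload)['instances'][0]['bytes']['b64'])
assert decoded == img_data
```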

You can test your model / requests out locally before deploying, to speed up the debugging process. For this, we'll use gcloud ml-engine local predict. However, before we do that, please note that gcloud's data format is a slight transformation of the request body shown above: gcloud treats each line of the input file as one instance/image and then constructs the request JSON from those lines. So instead of the above request body, the input file will contain:

{"bytes": {"b64": "<base64 encoded image>"}}
{"bytes": {"b64": "<base64 encoded image>"}}
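To see the relationship between the two formats, gcloud's wrapping step can be mimicked in a few lines of standard-library Python (this sketch illustrates the transformation; it is not gcloud's actual implementation):

```python
import json

def wrap_instances(newline_delimited_json):
    """Parses one JSON instance per line and wraps them in a request body,
    mirroring what gcloud does with the input file."""
    instances = [json.loads(line)
                 for line in newline_delimited_json.splitlines()
                 if line.strip()]
    return {'instances': instances}

# Two instances, one per line, as in the file above (payloads shortened).
lines = '\n'.join([
    '{"bytes": {"b64": "aW1hZ2Ux"}}',
    '{"bytes": {"b64": "aW1hZ2Uy"}}',
])
body = wrap_instances(lines)
assert len(body['instances']) == 2
```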

gcloud will transform this file into the request above. Here is some example Python code that can produce a file suitable for use with gcloud:

import base64
import sys

for filename in sys.argv[1:]:
  with open(filename, 'rb') as f:
    img_data = f.read()
    print('{"bytes": {"b64": "%s"}}' % base64.b64encode(img_data).decode('utf-8'))

(We'll refer to this file as to_instances.py.)

To test the model with predictions:

python to_instances.py img1.jpg img2.jpg > instances.json
gcloud ml-engine local predict --model-dir /path/to/model --json-instances=instances.json

After we've finished debugging, we can deploy the model to the cloud using gcloud ml-engine models create and gcloud ml-engine versions create as described in the documentation.

At this point, you can use your desired client to send requests to your model on the service. Note that this will require an authentication token. We'll examine a few examples in various languages. In each case, we'll assume your model is called my_model.

gcloud

This is almost identical to local predict:

python to_instances.py img1.jpg img2.jpg > instances.json
gcloud ml-engine predict --model my_model --json-instances=instances.json    

curl

We'll need a script like to_instances.py to convert images; let's call it to_payload.py:

import base64
import json
import sys

instances = []
for filename in sys.argv[1:]:
  with open(filename, 'rb') as f:
    img_data = f.read()
    instances.append({"bytes": {"b64": base64.b64encode(img_data).decode('utf-8')}})
print(json.dumps({"instances": instances}))

python to_payload.py img1.jpg img2.jpg > payload.json

curl -m 180 -X POST -v -k -H "Content-Type: application/json" \
    -d @payload.json \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    https://ml.googleapis.com/v1/projects/${YOUR_PROJECT}/models/my_model:predict
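The same request can also be issued from Python's standard library without any Google client libraries. A minimal sketch, assuming the project ID, model name, and access token (e.g. the output of gcloud auth print-access-token) are filled in; the placeholders below are purely illustrative:

```python
import json
import urllib.request

# Illustrative placeholders: substitute a real project ID, base64-encoded
# image, and OAuth2 access token before sending.
project = 'my_project'
access_token = '<output of `gcloud auth print-access-token`>'
body = {'instances': [{'bytes': {'b64': '<base64 encoded image>'}}]}

# Build the same POST request that the curl command above sends.
req = urllib.request.Request(
    'https://ml.googleapis.com/v1/projects/%s/models/my_model:predict' % project,
    data=json.dumps(body).encode('utf-8'),
    headers={
        'Content-Type': 'application/json',
        'Authorization': 'Bearer %s' % access_token,
    },
    method='POST')

# urllib.request.urlopen(req) would actually send it; not executed here
# because it needs real credentials and a deployed model.
```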

Python

import base64

import googleapiclient.discovery

PROJECT = "my_project"
MODEL = "my_model"

img_data = ... # your client will have its own way to get image data.

# Create the ML Engine service object.
# To authenticate set the environment variable
# GOOGLE_APPLICATION_CREDENTIALS=<path_to_service_account_file>
service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}'.format(PROJECT, MODEL)

response = service.projects().predict(
    name=name,
    body={'instances': [{'bytes': {'b64': base64.b64encode(img_data).decode('utf-8')}}]}
).execute()

if 'error' in response:
    raise RuntimeError(response['error'])

print(response['predictions'])

JavaScript/Java/C#

Sending requests in JavaScript/Java/C# is covered elsewhere (JavaScript, Java, C#, respectively), and those examples should be straightforward to adapt.
