Question
I have trained a semantic segmentation model using SageMaker and the output has been saved to an S3 bucket. I want to load this model from S3 to predict some images in SageMaker.
I know how to predict if I leave the notebook instance running after training, since that's just a simple deploy, but that doesn't really help if I want to use an older model.
I have looked at these sources and was able to come up with something myself, but it doesn't work, hence why I'm here:
https://course.fast.ai/deployment_amzn_sagemaker.html#deploy-to-sagemaker
https://aws.amazon.com/getting-started/tutorials/build-train-deploy-machine-learning-model-sagemaker/
https://sagemaker.readthedocs.io/en/stable/pipeline.html
My code looks like this:
from sagemaker.pipeline import PipelineModel
from sagemaker.model import Model
s3_model_bucket = 'bucket'
s3_model_key_prefix = 'prefix'
data = 's3://{}/{}/{}'.format(s3_model_bucket, s3_model_key_prefix, 'model.tar.gz')
models = ss_model.create_model() # ss_model is my sagemaker.estimator
model = PipelineModel(name=data, role=role, models=[models])
ss_predictor = model.deploy(initial_instance_count=1, instance_type='ml.c4.xlarge')
Answer
You can actually instantiate a Python SDK Model object from existing artifacts and deploy it to an endpoint. This allows you to deploy a model from trained artifacts, without having to retrain in the notebook. For example, for the semantic segmentation model:
trainedmodel = sagemaker.model.Model(
model_data='s3://...model path here../model.tar.gz',
image='685385470294.dkr.ecr.eu-west-1.amazonaws.com/semantic-segmentation:latest', # example path for the semantic segmentation in eu-west-1
role=role) # your role here; could be different name
trainedmodel.deploy(initial_instance_count=1, instance_type='ml.c4.xlarge')
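If you prefer not to hardcode the region- and account-specific ECR path, the v1 Python SDK can resolve the built-in algorithm image for you. The following is a minimal sketch under a few assumptions: the model_data path is a placeholder for your own artifact location, and get_execution_role() only resolves inside SageMaker (pass an explicit IAM role ARN otherwise):

import boto3
import sagemaker
from sagemaker.amazon.amazon_estimator import get_image_uri

region = boto3.Session().region_name
role = sagemaker.get_execution_role()  # assumption: running in SageMaker; otherwise use an explicit role ARN

# Look up the built-in semantic-segmentation container for the current region
training_image = get_image_uri(region, 'semantic-segmentation', repo_version='latest')

trainedmodel = sagemaker.model.Model(
    model_data='s3://your-bucket/your-prefix/model.tar.gz',  # placeholder: your own artifact path
    image=training_image,
    role=role)

trainedmodel.deploy(initial_instance_count=1, instance_type='ml.c4.xlarge')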
And similarly, you can instantiate a predictor object on a deployed endpoint from any authenticated client supporting the SDK, with the following command:
predictor = sagemaker.predictor.RealTimePredictor(
endpoint='endpoint name here',
content_type='image/jpeg',
accept='image/png')
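As a usage illustration (not from the original answer), the predictor can then be called with raw JPEG bytes and, given accept='image/png', the response body is the PNG-encoded segmentation mask; the file names below are placeholders:

import sagemaker

predictor = sagemaker.predictor.RealTimePredictor(
    endpoint='endpoint name here',   # the endpoint deployed above
    content_type='image/jpeg',
    accept='image/png')

# Send the raw JPEG bytes to the endpoint
with open('test.jpg', 'rb') as f:
    payload = f.read()

# predict() returns the response body, here the PNG-encoded mask
mask_png = predictor.predict(payload)

with open('mask.png', 'wb') as f:
    f.write(mask_png)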
More on those abstractions:
Model: https://sagemaker.readthedocs.io/en/stable/model.html
Predictor: https://sagemaker.readthedocs.io/en/stable/predictors.html