Problem Description
I have a pre-trained model.pkl file, along with all the other files related to the ML model, and I want to deploy it on AWS SageMaker. But how do I deploy it to SageMaker without training? The fit() method in SageMaker runs the training job and pushes model.tar.gz to an S3 location, and deploy() then uses that same S3 location to deploy the model. We don't create that S3 location manually; it is created by SageMaker and named using a timestamp. How can I put my own model.tar.gz file in an S3 location and call deploy() using that location?
Recommended Answer
All you need is:
- to have your model in an arbitrary S3 location, in a model.tar.gz archive (a packaging sketch follows this list)
- to have an inference script in a SageMaker-compatible Docker image that is able to read your model.pkl, serve it, and handle inferences
- to create an endpoint associating your artifact with your inference code
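For instance, here is a minimal sketch of the packaging and upload step; the bucket name and key below are hypothetical, and any S3 location you control will do:

```python
import tarfile
import boto3

# Package the pre-trained model in the archive layout SageMaker expects:
# files at the root of a gzipped tarball.
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("model.pkl")

# Upload to an arbitrary S3 location of your choosing (hypothetical bucket/key).
boto3.client("s3").upload_file("model.tar.gz", "my-bucket", "models/model.tar.gz")
```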
When you ask for an endpoint deployment, SageMaker will take care of downloading your model.tar.gz and uncompressing it to the appropriate location in the server's Docker image, which is /opt/ml/model.
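For example, with the pre-built Scikit-learn container, the model-loading hook of your inference script reads from exactly that directory. A minimal sketch, assuming the archive contains a joblib-pickled estimator named model.pkl:

```python
# inference.py -- model-loading hook for the SageMaker Scikit-learn container.
import os
import joblib

def model_fn(model_dir):
    # model_dir is /opt/ml/model, where SageMaker extracted model.tar.gz
    return joblib.load(os.path.join(model_dir, "model.pkl"))
```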
Depending on the framework you use, you may be able to use a pre-existing Docker image (available for Scikit-learn, TensorFlow, PyTorch, and MXNet), or you may need to create your own.
- Regarding custom image creation, see here the specification and here two examples of custom containers, for R and for sklearn (the sklearn one is less relevant now that there is a pre-built Docker image along with a SageMaker sklearn SDK)
- Regarding leveraging existing containers for Sklearn, PyTorch, MXNet, and TF, check this example: Random Forest in SageMaker Sklearn container. In this example, nothing prevents you from deploying a model that was trained elsewhere. Note, though, that with a train/deploy environment mismatch you may run into errors due to software version differences.
Regarding the experience you describe:
I agree that sometimes the demos that use the SageMaker Python SDK (one of the many available SDKs for SageMaker) may be misleading, in the sense that they often leverage the fact that an Estimator that has just been trained can be deployed (Estimator.deploy(..)) in the same session, without having to instantiate the intermediary model concept that maps inference code to a model artifact. This design was presumably chosen for the sake of code compactness, but in real life, training and deployment of a given model may well be done from different scripts running on different systems. It is perfectly possible to deploy a model without having trained it previously in the same session: you need to instantiate a sagemaker.model.Model object and then deploy it, as in the sketch below.
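A minimal sketch with the generic Model class; the S3 URI, image URI, and instance type are placeholders, not values from the question, and SageMaker Python SDK v2 syntax is assumed:

```python
import sagemaker
from sagemaker.model import Model

role = sagemaker.get_execution_role()  # or a hardcoded IAM role ARN

model = Model(
    model_data="s3://my-bucket/models/model.tar.gz",  # your own artifact (hypothetical URI)
    image_uri="<ECR URI of your SageMaker-compatible inference image>",
    role=role,
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
)
```

If you use one of the pre-built framework containers, the framework-specific subclasses (for example, sagemaker.sklearn.model.SKLearnModel) take an entry_point argument pointing at your inference script, so you do not need to build an image yourself.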