Problem description
I'm trying to add a new model in tensorflow_model_server
using the following code:
```python
from tensorflow_serving.apis import model_service_pb2_grpc
from tensorflow_serving.apis import model_management_pb2
from tensorflow_serving.config import model_server_config_pb2
import grpc

def add_model_config(host, name, base_path, model_platform):
    channel = grpc.insecure_channel(host)
    stub = model_service_pb2_grpc.ModelServiceStub(channel)
    request = model_management_pb2.ReloadConfigRequest()
    model_server_config = model_server_config_pb2.ModelServerConfig()

    # Create a config to add to the list of served models
    config_list = model_server_config_pb2.ModelConfigList()
    one_config = config_list.config.add()
    one_config.name = name
    one_config.base_path = base_path
    one_config.model_platform = model_platform

    model_server_config.model_config_list.CopyFrom(config_list)
    request.config.CopyFrom(model_server_config)
    print(request.IsInitialized())
    print(request.ListFields())

    response = stub.HandleReloadConfigRequest(request, 10)
    if response.status.error_code == 0:
        print("Reload successful")
        return True
    else:
        print("Reload failed!")
        print(response.status.error_code)
        print(response.status.error_message)
        return False
```
But whenever I try to execute it, the server unloads all of the previously loaded models. Is it possible to keep serving all the existing models and just add a new one?
Recommended answer
There is no easy way to just add a new model while keeping the previous ones loaded: HandleReloadConfigRequest takes the full desired config, so any model missing from the list you send gets unloaded.
What I have been doing is to always keep the latest model config list on disk. Whenever I need to refresh the models for any reason (add, remove, or update), I read that config file from disk, make the appropriate modification, call HandleReloadConfigRequest() with the full config list, and then save the updated list back to disk.
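As a rough sketch of that read-modify-write cycle, the helper below appends a new entry to the text-format config directly, so it runs without tensorflow_serving installed; in a real deployment you would instead parse the file into a ModelServerConfig with google.protobuf.text_format and send it via HandleReloadConfigRequest as in the question's code. The model names and paths here are illustrative.

```python
def add_model_to_config_text(config_text, name, base_path, model_platform="tensorflow"):
    """Return config_text with one more config {...} entry appended.

    Assumes the standard model_config_list textproto layout. A real
    deployment would parse/serialize with google.protobuf.text_format
    and a ModelServerConfig message instead of string surgery.
    """
    entry = (
        '  config {\n'
        f'    name: "{name}"\n'
        f'    base_path: "{base_path}"\n'
        f'    model_platform: "{model_platform}"\n'
        '  }\n'
    )
    # Insert the new entry just before the closing brace of model_config_list.
    closing = config_text.rfind("}")
    return config_text[:closing] + entry + config_text[closing:]


# Illustrative existing config, as it would be read from disk.
existing = (
    'model_config_list {\n'
    '  config {\n'
    '    name: "model_a"\n'
    '    base_path: "/models/model_a"\n'
    '    model_platform: "tensorflow"\n'
    '  }\n'
    '}\n'
)
updated = add_model_to_config_text(existing, "model_b", "/models/model_b")
print(updated)
```

After producing the updated config, write it back to disk and send the full list to the server, so every model you still want served stays in the request.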
The file on disk (say /models/models.config) becomes the authoritative record of which models are loaded into tf.serve at any given time. This way you can recover from a tf.serve restart and have the comfort of knowing that it will load the correct models. The option for specifying the config file during server start is --model_config_file /models/models.config.
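For reference, a minimal models.config in that text protobuf format might look like this (the model names and base paths are illustrative):

```
model_config_list {
  config {
    name: "model_a"
    base_path: "/models/model_a"
    model_platform: "tensorflow"
  }
  config {
    name: "model_b"
    base_path: "/models/model_b"
    model_platform: "tensorflow"
  }
}
```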