I am trying to load a tf-agents policy that was saved with:

try:
    PolicySaver(collect_policy).save(model_dir + 'collect_policy')
except TypeError:
    tf.saved_model.save(collect_policy, model_dir + 'collect_policy')

A quick explanation of the try/except block: when the policy is first created it can be saved with PolicySaver, but once I load it again for another round of training it is a SavedModel and can therefore no longer be saved with PolicySaver.
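
A minimal sketch of the second training run that makes the try/except necessary (the path and variable names are only illustrative, mirroring the snippet above):

import tensorflow as tf
from tf_agents.policies.policy_saver import PolicySaver

model_dir = 'models/'  # hypothetical path, just for illustration

# second run: the policy comes back as a plain SavedModel object, not a TFPolicy
collect_policy = tf.saved_model.load(model_dir + 'collect_policy')

try:
    # works only while collect_policy is a freshly built TFPolicy
    PolicySaver(collect_policy).save(model_dir + 'collect_policy')
except TypeError:
    # reloaded SavedModel: fall back to saving it directly
    tf.saved_model.save(collect_policy, model_dir + 'collect_policy')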

This seems to work fine, but now I want to use the policy for self-play, so I load it in my AIPlayer class with self.policy = tf.saved_model.load(policy_path). When I then try to use it for a prediction, however, it does not work. Here is the (test) code:
def decide(self, table):
    # ts is tf_agents.trajectories.time_step; state is the flat observation vector
    state = table.getState()
    timestep = ts.restart(np.array([state], dtype=np.float64))
    prediction = self.policy.action(timestep)
    print(prediction)

The table passed into the function holds the state of the game, and the ts.restart() call is copied from my custom PyEnvironment, so the time step is constructed exactly the same way as it is in the environment. For the line prediction = self.policy.action(timestep), however, I get the following error message:
ValueError: Could not find matching function to call loaded from the SavedModel. Got:
  Positional arguments (2 total):
    * TimeStep(step_type=<tf.Tensor 'time_step:0' shape=() dtype=int32>, reward=<tf.Tensor 'time_step_1:0' shape=() dtype=float32>, discount=<tf.Tensor 'time_step_2:0' shape=() dtype=float32>, observation=<tf.Tensor 'time_step_3:0' shape=(1, 79) dtype=float64>)
    * ()
  Keyword arguments: {}

Expected these arguments to match one of the following 2 option(s):

Option 1:
  Positional arguments (2 total):
    * TimeStep(step_type=TensorSpec(shape=(None,), dtype=tf.int32, name='time_step/step_type'), reward=TensorSpec(shape=(None,), dtype=tf.float32, name='time_step/reward'), discount=TensorSpec(shape=(None,), dtype=tf.float32, name='time_step/discount'), observation=TensorSpec(shape=(None, 79), dtype=tf.float64, name='time_step/observation'))
    * ()
  Keyword arguments: {}

Option 2:
  Positional arguments (2 total):
    * TimeStep(step_type=TensorSpec(shape=(None,), dtype=tf.int32, name='step_type'), reward=TensorSpec(shape=(None,), dtype=tf.float32, name='reward'), discount=TensorSpec(shape=(None,), dtype=tf.float32, name='discount'), observation=TensorSpec(shape=(None, 79), dtype=tf.float64, name='observation'))
    * ()
  Keyword arguments: {}
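
For reference, the unbatched shapes in the "Got:" part can be reproduced with a minimal, self-contained sketch; only the observation shape (1, 79) and the float64 dtype are taken from the error above, the rest is a dummy:

import numpy as np
from tf_agents.trajectories import time_step as ts

obs = np.zeros((1, 79), dtype=np.float64)  # dummy observation shaped like mine
timestep = ts.restart(obs)
print(timestep)  # step_type, reward and discount come out as unbatched scalars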

What am I doing wrong? Is it really just the tensor names, or is it the shapes? And how would I change that?

Any ideas on how to debug this further are appreciated.

Best answer

I got it working by constructing the TimeStep manually:

    # ts is tf_agents.trajectories.time_step; state is the flat observation
    # vector from table.getState()
    step_type = tf.convert_to_tensor(
        [0], dtype=tf.int32, name='step_type')
    reward = tf.convert_to_tensor(
        [0], dtype=tf.float32, name='reward')
    discount = tf.convert_to_tensor(
        [1], dtype=tf.float32, name='discount')
    observations = tf.convert_to_tensor(
        [state], dtype=tf.float64, name='observations')
    timestep = ts.TimeStep(step_type, reward, discount, observations)
    prediction = self.policy.action(timestep)
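
Since every field just gains a leading length-1 batch dimension here (the explicit tensor names don't seem to be what the signature checks, the shapes are), the same batched TimeStep should also be obtainable from ts.restart with its optional batch_size argument; a minimal sketch, assuming state and self.policy as above:

    timestep = ts.restart(
        np.array([state], dtype=np.float64), batch_size=1)
    prediction = self.policy.action(timestep)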
