I have been trying to use the OpenAI Stable Baselines algorithms with the custom OpenAI Gym environment for a fixed-wing UAV from https://github.com/eivindeb/fixed-wing-gym, but I have been stuck on this for several days now. My starting point is the CartPole example "Multiprocessing: Unleashing the Power of Vectorized Environments" from https://stable-baselines.readthedocs.io/en/master/guide/examples.html#multiprocessing-unleashing-the-power-of-vectorized-environments, since I need to pass arguments to the environment and I want to use multiprocessing, so I believe this example is exactly what I need.

I modified the baselines example as follows:

import gym
import numpy as np

from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import SubprocVecEnv
from stable_baselines.common import set_global_seeds
from stable_baselines import ACKTR, PPO2
from gym_fixed_wing.fixed_wing import FixedWingAircraft


def make_env(env_id, rank, seed=0):
    """
    Utility function for multiprocessed env.

    :param env_id: (str) the environment ID
    :param seed: (int) the initial seed for RNG
    :param rank: (int) index of the subprocess
    """

    def _init():
        env = FixedWingAircraft("fixed_wing_config.json")
        #env = gym.make(env_id)
        env.seed(seed + rank)
        return env

    set_global_seeds(seed)
    return _init

if __name__ == '__main__':
    env_id = "fixed_wing"
    #env_id = "CartPole-v1"
    num_cpu = 4  # Number of processes to use
    # Create the vectorized environment
    env = SubprocVecEnv([lambda: FixedWingAircraft for i in range(num_cpu)])
    #env = SubprocVecEnv([make_env(env_id, i) for i in range(num_cpu)])

    model = PPO2(MlpPolicy, env, verbose=1)
    model.learn(total_timesteps=25000)

    obs = env.reset()
    for _ in range(1000):
        action, _states = model.predict(obs)
        obs, rewards, dones, info = env.step(action)
        env.render()


The error I keep getting is the following:

Traceback (most recent call last):
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/fixed-wing-gym/gym_fixed_wing/ACKTR_fixedwing.py", line 38, in <module>
    model = PPO2(MlpPolicy, env, verbose=1)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/ppo2/ppo2.py", line 104, in __init__
    self.setup_model()
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/ppo2/ppo2.py", line 134, in setup_model
    n_batch_step, reuse=False, **self.policy_kwargs)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/policies.py", line 660, in __init__
    feature_extraction="mlp", **_kwargs)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/policies.py", line 540, in __init__
    scale=(feature_extraction == "cnn"))
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/policies.py", line 221, in __init__
    scale=scale)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/policies.py", line 117, in __init__
    self._obs_ph, self._processed_obs = observation_input(ob_space, n_batch, scale=scale)
  File "/home/bonie/PycharmProjects/deepRL_fixedwing/stable-baselines/stable_baselines/common/input.py", line 51, in observation_input
    type(ob_space).__name__))
NotImplementedError: Error: the model does not support input space of type NoneType


I am not sure what I should actually pass in as the env_id for the make_env(env_id, rank, seed=0) function. I also think the VecEnv for the parallel processes is not set up correctly.

I am coding in Python v3.6 with the PyCharm IDE on Ubuntu 18.04.

Any suggestions at this point would really help!

Thank you.

Best Answer

You have created a custom environment, but you have not registered it with the OpenAI Gym interface, which is what env_id refers to. Every environment in gym can be instantiated by calling its registered name.
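
For example, every built-in environment is created from its registered ID like this:

import gym

# Look up the environment in gym's registry by its registered ID
# and instantiate it.
env = gym.make("CartPole-v1")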

So basically, what you have to do is follow the setup instructions here, create the appropriate __init__.py and setup.py scripts, and keep the same file structure.
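
As a rough sketch (the registered ID FixedWing-v0, the entry_point module path, and the config_path keyword are my assumptions, not names taken from the fixed-wing-gym repository), the two scripts could look like this:

# gym_fixed_wing/__init__.py
from gym.envs.registration import register

register(
    id="FixedWing-v0",  # assumed ID; this is the name you pass to gym.make()
    entry_point="gym_fixed_wing.fixed_wing:FixedWingAircraft",
    kwargs={"config_path": "fixed_wing_config.json"},  # assumed keyword name for the constructor argument
)

# setup.py
from setuptools import setup, find_packages

setup(
    name="gym_fixed_wing",
    version="0.0.1",
    packages=find_packages(),
    install_requires=["gym"],
)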

Finally, install the package locally by running pip install -e . from the directory of your environment.
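
Once the package is installed and the environment is registered, the commented-out make_env path from your script should work. A minimal sketch, assuming the FixedWing-v0 ID registered above:

import gym

from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import SubprocVecEnv
from stable_baselines.common import set_global_seeds
from stable_baselines import PPO2


def make_env(env_id, rank, seed=0):
    """Return a factory that creates one seeded copy of the registered env."""
    def _init():
        env = gym.make(env_id)  # only works after the environment is registered
        env.seed(seed + rank)
        return env
    set_global_seeds(seed)
    return _init


if __name__ == "__main__":
    env_id = "FixedWing-v0"  # assumed ID; use whatever name you registered
    num_cpu = 4
    # Pass factory functions so each subprocess builds its own environment
    # *instance*; SubprocVecEnv calls each function inside its worker.
    env = SubprocVecEnv([make_env(env_id, i) for i in range(num_cpu)])

    model = PPO2(MlpPolicy, env, verbose=1)
    model.learn(total_timesteps=25000)

Note also that your current line SubprocVecEnv([lambda: FixedWingAircraft for i in range(num_cpu)]) returns the FixedWingAircraft class itself rather than an instance (the class is never called), so the workers see an observation space of None, which is most likely where the "input space of type NoneType" error comes from.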

Regarding "python - How to use a custom OpenAI gym environment with OpenAI Stable Baselines RL algorithms?", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/58941164/
