I built a deep Q-network to play Snake. The code runs fine, except that performance doesn't really improve over the course of training. In the end the agent is barely distinguishable from one that takes random actions. Here is the training code:
def train(self):
    self.build_model()
    for episode in range(self.max_episodes):
        self.current_episode = episode
        env = SnakeEnv(self.screen)
        episode_reward = 0
        for timestep in range(self.max_steps):
            env.render(self.screen)
            state = env.get_state()
            action = None
            epsilon = self.current_eps
            if epsilon > random.random():
                action = np.random.choice(env.action_space)  # explore
            else:
                values = self.policy_model.predict(env.get_state())  # exploit
                action = np.argmax(values)
            experience = env.step(action)
            if experience['done'] == True:
                episode_reward += 5 * (len(env.snake.List) - 1)
                episode_reward += experience['reward']
                break
            episode_reward += experience['reward']
            if len(self.memory) < self.memory_size:
                self.memory.append(Experience(experience['state'], experience['action'], experience['reward'], experience['next_state']))
            else:
                self.memory[self.push_count % self.memory_size] = Experience(experience['state'], experience['action'], experience['reward'], experience['next_state'])
            self.push_count += 1
            self.decay_epsilon(episode)
            if self.can_sample_memory():
                memory_sample = self.sample_memory()
                #q_pred = np.zeros((self.batch_size, 1))
                #q_target = np.zeros((self.batch_size, 1))
                #i = 0
                for memory in memory_sample:
                    memstate = memory.state
                    action = memory.action
                    next_state = memory.next_state
                    reward = memory.reward
                    max_q = reward + self.discount_rate * self.replay_model.predict(next_state)
                    #q_pred[i] = q_value
                    #q_target[i] = max_q
                    #i += 1
                    self.policy_model.fit(memstate, max_q, epochs=1, verbose=0)
        print("Episode: ", episode, " Total Reward: ", episode_reward)
        if episode % self.target_update == 0:
            self.replay_model.set_weights(self.policy_model.get_weights())
        self.policy_model.save_weights('weights.hdf5')
    pygame.quit()
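The commented-out q_pred / q_target arrays hint at a batched update that was never finished. Below is a minimal sketch of what such a batched replay step could look like. It assumes each Experience exposes .state, .action, .reward and .next_state, and uses the standard DQN target r + discount_rate * max over the target network's next-state Q-values; terminal states are not masked here because the Experience tuple above does not store a done flag.

import numpy as np

def replay_update(policy_model, target_model, memory_sample, discount_rate):
    # Sketch only: batch the sampled experiences instead of fitting one at a time.
    states      = np.vstack([m.state for m in memory_sample])       # (batch, 400)
    next_states = np.vstack([m.next_state for m in memory_sample])  # (batch, 400)
    rewards     = np.array([m.reward for m in memory_sample], dtype=np.float32)
    actions     = np.array([m.action for m in memory_sample], dtype=np.int64)

    q_target = policy_model.predict(states)        # current Q estimates, shape (batch, 5)
    q_next   = target_model.predict(next_states)   # Q values from the frozen target network
    # Standard DQN target, written only into the action that was actually taken
    q_target[np.arange(len(memory_sample)), actions] = rewards + discount_rate * q_next.max(axis=1)

    policy_model.fit(states, q_target, epochs=1, verbose=0)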
Here are the hyperparameters:
learning_rate = 0.5
discount_rate = 0.99
eps_start = 1
eps_end = .01
eps_decay = .001
memory_size = 100000
batch_size = 256
max_episodes = 1000
max_steps = 5000
target_update = 10
Here is the network architecture:
model = models.Sequential()
model.add(Dense(500, activation = 'relu', kernel_initializer = 'random_uniform', bias_initializer = 'zeros', input_dim = 400))
model.add(Dense(500, activation = 'relu', kernel_initializer = 'random_uniform', bias_initializer = 'zeros'))
model.add(Dense(5, activation = 'tanh', kernel_initializer = 'random_uniform', bias_initializer = 'zeros')) #tanh for last layer because q value can be > 1
model.compile(loss='mean_squared_error', optimizer = 'adam')
For reference, the network outputs 5 values because the snake can move in 4 directions, and the fifth action is to do nothing. Also, unlike a traditional DQN I am not passing in screenshots of the game; instead I pass in a 400-dimensional vector representing the 20 x 20 grid the game takes place in. The agent gets a reward of 1 for moving closer to the food or for eating the food, and a reward of -1 if it dies. How can I improve performance?
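As an illustration only, the 20 x 20 grid could be flattened into the 400-dimensional input roughly like this (the cell codes below are assumptions, not taken from the actual SnakeEnv):

import numpy as np

# Hypothetical cell codes: 0 = empty, 1 = snake body, 2 = snake head, 3 = food.
def encode_state(grid):
    grid = np.asarray(grid, dtype=np.float32)  # shape (20, 20)
    return grid.reshape(1, 400)                # one 400-dim row vector for model.predict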
Best answer
I think the main problem is that your learning rate is too high. Try values below 0.001; the Atari DQN used 0.00025.
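As a concrete example, you can pass an explicit Adam instance instead of the 'adam' string (which uses Keras' default rate of 0.001). This assumes the same Keras imports as the model code above; older Keras versions name the argument lr rather than learning_rate:

from keras.optimizers import Adam  # or tensorflow.keras.optimizers, depending on the setup

# Pass an explicit optimizer so the suggested learning rate actually takes effect.
model.compile(loss='mean_squared_error',
              optimizer=Adam(learning_rate=0.00025))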
Also set target_update to something higher than 10, for example 500 or more.
You also need at least 10,000 steps to see anything.
Lower the batch size to 32 or 64.
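Putting those adjustments together, the hyperparameter block might look something like this (the values are simply the suggestions above, not tuned results):

learning_rate = 0.00025   # well below 0.001, as in the Atari DQN
discount_rate = 0.99
eps_start = 1
eps_end = .01
eps_decay = .001
memory_size = 100000
batch_size = 32           # or 64
max_episodes = 1000
max_steps = 10000         # at least 10,000 steps
target_update = 500       # or higher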
Have you also considered other improvements, like PER (prioritized experience replay) or a Dueling DQN?
Check this out: https://www.freecodecamp.org/news/improvements-in-deep-q-learning-dueling-double-dqn-prioritized-experience-replay-and-fixed-58b130cc5682/
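As a rough illustration, a dueling head only changes the top of the network; the sketch below keeps the layer sizes from your model and is otherwise an assumption, not your code:

from keras.layers import Input, Dense, Lambda
from keras.models import Model
import keras.backend as K

# Dueling DQN sketch: Q(s, a) = V(s) + (A(s, a) - mean_a A(s, a))
inp = Input(shape=(400,))
x = Dense(500, activation='relu')(inp)
x = Dense(500, activation='relu')(x)
value = Dense(1)(x)        # state value V(s)
adv = Dense(5)(x)          # per-action advantages A(s, a)
q = Lambda(lambda va: va[0] + (va[1] - K.mean(va[1], axis=1, keepdims=True)))([value, adv])
model = Model(inputs=inp, outputs=q)
model.compile(loss='mean_squared_error', optimizer='adam')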
And if you would rather not reinvent the wheel, consider https://stable-baselines.readthedocs.io/en/master/
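For instance, a minimal DQN run with stable-baselines looks roughly like this. CartPole-v1 is only a stand-in, since the Snake game would first need to be wrapped as a gym.Env, and the keyword arguments follow the stable-baselines DQN documentation:

import gym
from stable_baselines import DQN
from stable_baselines.deepq.policies import MlpPolicy

env = gym.make('CartPole-v1')  # stand-in; replace with a gym.Env wrapper around the Snake game

model = DQN(MlpPolicy, env,
            learning_rate=0.00025,
            buffer_size=100000,
            batch_size=32,
            target_network_update_freq=500,
            verbose=1)
model.learn(total_timesteps=100000)
model.save('dqn_snake')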
Finally, you can look at a similar project: https://github.com/lukaskiss222/agarDQNbot