Problem description
I'm training a CNN with this [.config file][1]:

rms_prop_optimizer: {
  learning_rate: {
    exponential_decay_learning_rate {
      initial_learning_rate: 0.004
      decay_steps: 800720
      decay_factor: 0.95
    }
  }
  momentum_optimizer_value: 0.9
  decay: 0.9
  epsilon: 1.0
}

As you can see, the optimizer is rms_prop. What if I want to use Adam instead? How should I edit this file?
If I'm right, you're trying to use an object_detection model with a pre-trained network offered by TensorFlow, am I right? Then, if you know a little programming, you can take a look at models/research/object_detection/builders/optimizer_builder.py to see which optimizers can be used and with which parameters (a sketch of that dispatch follows the config below). If instead you just want an out-of-the-box solution, this is how I did it:
optimizer {
  # momentum_optimizer {          (replaced by adam_optimizer below)
  adam_optimizer: {
    learning_rate: {
      manual_step_learning_rate {
        initial_learning_rate: .0002
        schedule {
          step: 4500
          learning_rate: .0001
        }
        schedule {
          step: 7000
          learning_rate: .00008
        }
        schedule {
          step: 10000
          learning_rate: .00004
        }
      }
    }
    # momentum_optimizer_value: 0.9
  }
  use_moving_average: false
}
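For reference, the dispatch in models/research/object_detection/builders/optimizer_builder.py essentially maps the oneof field names from optimizer.proto onto the TF1 tf.train classes. The sketch below is my own illustration of that mapping, not the library's actual code: make_optimizer and the hard-coded hyperparameters are hypothetical, and the real builder also constructs the learning-rate schedule from the config rather than taking a plain float.

import tensorflow as tf  # TF 1.x

def make_optimizer(name, learning_rate):
    """Illustrative mapping from .config optimizer names to tf.train classes."""
    if name == 'rms_prop_optimizer':
        # decay / momentum_optimizer_value / epsilon mirror the config fields
        return tf.train.RMSPropOptimizer(learning_rate, decay=0.9,
                                         momentum=0.9, epsilon=1.0)
    if name == 'momentum_optimizer':
        return tf.train.MomentumOptimizer(learning_rate, momentum=0.9)
    if name == 'adam_optimizer':
        return tf.train.AdamOptimizer(learning_rate)
    raise ValueError('Optimizer %s not supported.' % name)

# e.g. the Adam replacement from the answer, with the lowered initial LR:
optimizer = make_optimizer('adam_optimizer', 0.0002)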
In my (limited) experience I noticed that using the same learning_rate as the momentum_optimizer makes the learning too fast and/or leads to NaN losses, so I usually decrease it by a factor of 10 or more (note that the question's RMSProp config starts at 0.004, while the Adam config above starts at .0002). I'm trying that right now. :)
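If it helps to reason about it, manual_step_learning_rate behaves like a piecewise-constant schedule. Here is a minimal TF1 sketch of the answer's numbers, written against plain tf.train APIs rather than the object_detection builders (my own illustration, not part of the answer's config):

import tensorflow as tf  # TF 1.x

# Piecewise-constant LR mirroring the manual_step_learning_rate block above:
# .0002 until step 4500, then .0001, then .00008 after 7000, .00004 after 10000.
global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.piecewise_constant(
    global_step,
    boundaries=[4500, 7000, 10000],
    values=[0.0002, 0.0001, 0.00008, 0.00004])
optimizer = tf.train.AdamOptimizer(learning_rate)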