Question
What is the best way to perform hyperparameter optimization for a PyTorch model? Implement e.g. Random Search myself? Use Scikit-Learn? Or is there anything else I am not aware of?
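Random Search is simple enough to implement by hand. Below is a minimal sketch; the `objective()` function and all names are illustrative stand-ins, not from any library. In a real setting, `objective()` would train the PyTorch model with the sampled configuration and return a validation score.

```python
import random

def objective(lr, hidden_size):
    # Dummy score standing in for "train the model and return validation
    # accuracy"; it peaks near lr=0.01, hidden_size=64 (illustration only).
    return -abs(lr - 0.01) * 100 - abs(hidden_size - 64) / 64

def random_search(n_trials, seed=0):
    rng = random.Random(seed)
    best_score, best_config = float("-inf"), None
    for _ in range(n_trials):
        # Sample a configuration: log-uniform learning rate, categorical size.
        config = {
            "lr": 10 ** rng.uniform(-4, -1),
            "hidden_size": rng.choice([32, 64, 128]),
        }
        score = objective(**config)
        if score > best_score:
            best_score, best_config = score, config
    return best_config, best_score

best_config, best_score = random_search(n_trials=50)
print(best_config)
```

Sampling the learning rate on a log scale is the usual choice, since good values often span several orders of magnitude.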
Recommended Answer
Many researchers use Ray Tune. It's a scalable hyperparameter tuning framework designed specifically for deep learning. You can easily use it with any deep learning framework (two lines of code below), and it provides state-of-the-art algorithms including HyperBand, Population Based Training, Bayesian Optimization, and BOHB.
import torch.optim as optim
from ray import tune
from ray.tune.examples.mnist_pytorch import get_data_loaders, ConvNet, train, test

def train_mnist(config):
    train_loader, test_loader = get_data_loaders()
    model = ConvNet()
    optimizer = optim.SGD(model.parameters(), lr=config["lr"])
    for i in range(10):
        train(model, optimizer, train_loader)
        acc = test(model, test_loader)
        tune.report(mean_accuracy=acc)

analysis = tune.run(
    train_mnist, config={"lr": tune.grid_search([0.001, 0.01, 0.1])})

print("Best config: ", analysis.get_best_config(metric="mean_accuracy"))

# Get a dataframe for analyzing trial results.
df = analysis.dataframe()
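The HyperBand-style schedulers mentioned above are built on successive halving: evaluate many configurations on a small budget, keep the best fraction, and grow the budget. A minimal pure-Python sketch of that idea follows; the names and the `score()` function are illustrative assumptions, not Ray Tune's API, with `score(config, budget)` standing in for "train `config` for `budget` epochs and return validation accuracy".

```python
import random

def score(config, budget):
    # Dummy learning curve: configs with lr nearer 0.01 improve faster.
    return budget * (1.0 - abs(config["lr"] - 0.01))

def successive_halving(configs, min_budget=1, eta=2, rounds=3):
    budget = min_budget
    survivors = list(configs)
    for _ in range(rounds):
        # Evaluate every surviving config at the current budget...
        ranked = sorted(survivors, key=lambda c: score(c, budget), reverse=True)
        # ...keep only the top 1/eta, then multiply the budget by eta.
        survivors = ranked[: max(1, len(ranked) // eta)]
        budget *= eta
    return survivors[0]

rng = random.Random(0)
configs = [{"lr": 10 ** rng.uniform(-4, -1)} for _ in range(8)]
best = successive_halving(configs)
print(best)
```

The appeal of this scheme is that poor configurations are discarded after only a cheap, low-budget evaluation, so most of the compute goes to the promising ones.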
[Disclaimer: I contribute actively to this project!]