This article explains how to save a model created from a Pipeline and GridSearchCV in sklearn using joblib or pickle. It should be a useful reference for anyone facing the same problem.
Problem description
After identifying the best parameters using a pipeline and GridSearchCV, how do I pickle/joblib this process to re-use later? I see how to do this when it's a single classifier...
from sklearn.externals import joblib  # removed in recent scikit-learn versions; use `import joblib` instead
joblib.dump(clf, 'filename.pkl')
But how do I save the overall pipeline with the best parameters after performing and completing a gridsearch?
I tried:
joblib.dump(grid, 'output.pkl') - but that dumped every gridsearch attempt (many files)
joblib.dump(pipeline, 'output.pkl') - but I don't think that contains the best parameters
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import GridSearchCV

# df is a DataFrame holding the raw 'Keyword' text and the 'Ad Group' labels
X_train = df['Keyword']
y_train = df['Ad Group']

pipeline = Pipeline([
    ('tfidf', TfidfVectorizer()),
    ('sgd', SGDClassifier())
])

parameters = {'tfidf__ngram_range': [(1, 1), (1, 2)],
              'tfidf__use_idf': (True, False),
              'tfidf__max_df': [0.25, 0.5, 0.75, 1.0],
              'tfidf__max_features': [10, 50, 100, 250, 500, 1000, None],
              'tfidf__stop_words': ('english', None),
              'tfidf__smooth_idf': (True, False),
              'tfidf__norm': ('l1', 'l2', None),
              }

grid = GridSearchCV(pipeline, parameters, cv=2, verbose=1)
grid.fit(X_train, y_train)

# These were the best combination of tuning parameters discovered
## best_params = {'tfidf__max_features': None, 'tfidf__use_idf': False,
##                'tfidf__smooth_idf': False, 'tfidf__ngram_range': (1, 2),
##                'tfidf__max_df': 1.0, 'tfidf__stop_words': 'english',
##                'tfidf__norm': 'l2'}
Recommended answer
import joblib

# grid.best_estimator_ is the whole Pipeline, refit on the training data
# with the best parameter combination found by the grid search
joblib.dump(grid.best_estimator_, 'filename.pkl')
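Since the question also mentions pickle: the standard-library pickle module can serialize the same fitted estimator. A minimal sketch (the filename here is just an example, not part of the original answer):

import pickle

# serialize the best pipeline with plain pickle instead of joblib
with open('best_pipeline.pkl', 'wb') as f:
    pickle.dump(grid.best_estimator_, f)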
If you want joblib to dump the object into a single file rather than several, use:
joblib.dump(grid.best_estimator_, 'filename.pkl', compress = 1)
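To re-use the saved model later, load it back with joblib and call it like the fitted pipeline it is. A minimal sketch, assuming the 'filename.pkl' produced above and a made-up input string:

import joblib

# restore the fitted Pipeline (TfidfVectorizer + SGDClassifier)
model = joblib.load('filename.pkl')

# the pipeline accepts raw text, just like during training
print(model.predict(['example keyword']))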
That concludes this article on how to save a model created from a Pipeline and GridSearchCV in sklearn using joblib or pickle. Hopefully the recommended answer is helpful.