I want to build a large number of sklearn pipelines in parallel with Dask. Here is a simple but naive sequential approach:
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
iris = load_iris()
X_train, X_test, Y_train, Y_test = train_test_split(iris.data, iris.target, test_size=0.2)
pipe_nb = Pipeline([('clf', MultinomialNB())])
pipe_lr = Pipeline([('clf', LogisticRegression())])
pipe_rf = Pipeline([('clf', RandomForestClassifier())])
pipelines = [pipe_nb, pipe_lr, pipe_rf] # In reality, this would include many more different types of models with varying but specific parameters
for pl in pipelines:
    pl.fit(X_train, Y_train)
Note that this is not a GridSearchCV or RandomizedSearchCV question.
In the RandomizedSearchCV case, I know how to parallelize it with Dask:
from dask.distributed import Client
from sklearn.model_selection import RandomizedSearchCV
import scipy.stats
import joblib

dask_client = Client('tcp://some.host.com:8786')
clf_rf = RandomForestClassifier()
param_dist = {'n_estimators': scipy.stats.randint(100, 500)}
search_rf = RandomizedSearchCV(
    clf_rf,
    param_distributions=param_dist,
    n_iter=100,
    scoring='f1',
    cv=10,
    error_score=0,
    verbose=3,
)
with joblib.parallel_backend('dask'):
    search_rf.fit(X_train, Y_train)
However, I am not interested in hyperparameter tuning, and it is not clear how to modify this code so that it fits a collection of several different models, each with its own specific parameters, in parallel with Dask.
Best answer
dask.delayed is probably the simplest solution here.
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
iris = load_iris()
X_train, X_test, Y_train, Y_test = train_test_split(iris.data, iris.target, test_size=0.2)
pipe_nb = Pipeline([('clf', MultinomialNB())])
pipe_lr = Pipeline([('clf', LogisticRegression())])
pipe_rf = Pipeline([('clf', RandomForestClassifier())])
pipelines = [pipe_nb, pipe_lr, pipe_rf] # In reality, this would include many more different types of models with varying but specific parameters
# Use dask.delayed instead of a for loop.
import dask.delayed
pipelines_ = [dask.delayed(pl).fit(X_train, Y_train) for pl in pipelines]
fit_pipelines = dask.compute(*pipelines_)
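As a follow-up, here is a minimal sketch of how the same delayed fits could be dispatched to the distributed scheduler from the question and the fitted pipelines evaluated afterwards. The scheduler address is the hypothetical one from the question; once a distributed Client exists, dask.compute sends work to it by default.

from dask.distributed import Client

# Hypothetical scheduler address, reused from the question.
client = Client('tcp://some.host.com:8786')

# With a distributed Client active, dask.compute dispatches the delayed
# fits to the cluster's workers instead of the local scheduler.
pipelines_ = [dask.delayed(pl).fit(X_train, Y_train) for pl in pipelines]
fit_pipelines = dask.compute(*pipelines_)

# The results are ordinary fitted sklearn pipelines and can be scored locally.
for name, pl in zip(['nb', 'lr', 'rf'], fit_pipelines):
    print(name, pl.score(X_test, Y_test))

Since the question title also mentions Joblib, an alternative sketch (not part of the accepted answer) is to dispatch each fit call with joblib.Parallel on the Dask backend, the same mechanism the RandomizedSearchCV example above relies on; it also requires an active Client.

import joblib

with joblib.parallel_backend('dask'):
    fit_pipelines = joblib.Parallel()(
        joblib.delayed(pl.fit)(X_train, Y_train) for pl in pipelines
    )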
About python - Parallel sklearn model building with Dask or Joblib: a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/54355236/