This post addresses a question about the random_state parameter in scikit-learn decision trees, which many people find confusing; the question and the accepted answer follow.

Problem description

I'm confused about the random_state parameter, and I don't understand why decision tree training needs any randomness. My thoughts: (1) is it related to random forests? (2) is it related to splitting the training and test data sets? If so, why not just use the train/test split method directly (http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.train_test_split.html)?

http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html

>>> from sklearn.datasets import load_iris
>>> from sklearn.cross_validation import cross_val_score
>>> from sklearn.tree import DecisionTreeClassifier
>>> clf = DecisionTreeClassifier(random_state=0)
>>> iris = load_iris()
>>> cross_val_score(clf, iris.data, iris.target, cv=10)
...
array([ 1.     ,  0.93...,  0.86...,  0.93...,  0.93...,
        0.93...,  0.93...,  1.     ,  0.93...,  1.      ])

Regards, Lin

Solution

This is explained in the documentation:

The problem of learning an optimal decision tree is known to be NP-complete under several aspects of optimality, and even for simple concepts. Consequently, practical decision-tree learning algorithms are based on heuristics such as the greedy algorithm, where locally optimal decisions are made at each node. Such algorithms cannot guarantee returning the globally optimal decision tree. This can be mitigated by training multiple trees in an ensemble learner, where the features and samples are randomly sampled.

So, basically, a sub-optimal greedy algorithm is repeated a number of times using random selections of features and samples (a technique similar to the one used in random forests). The random_state parameter allows controlling these random choices.
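To make that randomness visible in a single tree, one sketch (not from the original answer, and assuming scikit-learn is installed) is to restrict each split to a random subset of features via max_features, which is exactly the random-forest-style selection described above:

```python
# Sketch: with max_features set, each split only considers a random subset of
# features, so the greedy algorithm's choices depend on random_state.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()

# Two trees that each examine 2 randomly chosen features per split,
# built with different seeds.
trees = [
    DecisionTreeClassifier(max_features=2, random_state=seed).fit(
        iris.data, iris.target
    )
    for seed in (0, 1)
]

# The seed drives which features are examined, so the learned trees can
# differ in structure; inspect e.g. the feature tested at each root node.
print([int(t.tree_.feature[0]) for t in trees])
```

With max_features left at its default (all features) and no ties, the greedy search is effectively deterministic; the seed only matters once there is a random choice to make.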

The interface documentation specifically states:

If int, random_state is the seed used by the random number generator; if RandomState instance, random_state is the random number generator; if None, the random number generator is the RandomState instance used by np.random.
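As a quick illustration of those three accepted forms (a sketch added here, not part of the original answer, assuming scikit-learn is installed):

```python
# Sketch: the three forms of random_state the docs describe.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()

clf_int = DecisionTreeClassifier(random_state=0)    # int: used as a seed
clf_rng = DecisionTreeClassifier(                   # explicit generator object
    random_state=np.random.RandomState(0)
)
clf_none = DecisionTreeClassifier(random_state=None)  # falls back to np.random

# All three train the same way; only the source of randomness differs.
for clf in (clf_int, clf_rng, clf_none):
    clf.fit(iris.data, iris.target)
    print(round(clf.score(iris.data, iris.target), 2))
```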

So, a randomized algorithm will be used in any case. Passing any value (whether a specific int, e.g. 0, or a RandomState instance) will not change that. The only rationale for passing an int value (0 or otherwise) is to make the outcome consistent across calls: if you call this with random_state=0 (or any other fixed value), then each and every time you'll get the same result.
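That reproducibility point can be checked directly. A minimal sketch, assuming scikit-learn is installed (note that the sklearn.cross_validation module from the question has since been replaced by sklearn.model_selection):

```python
# Sketch: a fixed random_state makes repeated fits (and cross-validation
# scores) identical from run to run.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score  # modern home of cross_val_score
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()

# Same seed, same data -> the greedy algorithm makes identical choices.
a = DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)
b = DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)
print(np.array_equal(a.tree_.feature, b.tree_.feature))  # True

# Cross-validation scores are likewise reproducible with a fixed seed.
s1 = cross_val_score(DecisionTreeClassifier(random_state=0),
                     iris.data, iris.target, cv=10)
s2 = cross_val_score(DecisionTreeClassifier(random_state=0),
                     iris.data, iris.target, cv=10)
print(np.array_equal(s1, s2))  # True
```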


