Decision tree (决策树)
Each internal node represents a test on an attribute, each branch represents an outcome of that test, and each leaf node represents a class or a class distribution. The topmost node of the tree is the root node.
Continuous variables must be discretized first.
Decision trees are an important algorithm among classification methods in machine learning.
Information entropy:
The amount of information in a message is directly related to its uncertainty: to figure out something that is highly uncertain, or something we know nothing about, we need a lot of new information. So the measure of information is the amount of uncertainty.
The greater the uncertainty of a variable, the greater its entropy.
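As a small illustration (not from the original notes), the entropy of a set of class labels, H = -Σ p_i · log2(p_i), can be computed like this:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H = -sum(p * log2(p)) over the label distribution."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Example: 9 "yes" and 5 "no" samples -> about 0.94 bits of uncertainty.
print(entropy(["yes"] * 9 + ["no"] * 5))
```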
ID3
It uses information entropy to choose the test attribute at each node.
The attribute with the largest information gain is chosen as the split for the current node.
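In other words, ID3 computes Gain(S, A) = H(S) - Σ_v (|S_v|/|S|) · H(S_v) for each candidate attribute A and splits on the largest one. A minimal, self-contained sketch with made-up toy data (the attribute and label names are only for illustration):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Gain(S, A) = H(S) - sum over values v of A of |S_v|/|S| * H(S_v)."""
    # Partition the labels by the value of the chosen attribute.
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(part) / len(labels) * entropy(part) for part in partitions.values())
    return entropy(labels) - remainder

# Hypothetical toy data: attribute 0 is an "age" bucket, the label is buys_computer.
rows = [["youth"], ["youth"], ["middle"], ["senior"], ["senior"]]
labels = ["no", "no", "yes", "yes", "no"]
print(information_gain(rows, labels, 0))  # ~0.571 bits
```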
Advantages and disadvantages of decision trees
Advantages: intuitive, easy to understand, effective on small datasets.
Disadvantages: handles continuous variables poorly; when there are many classes, errors grow relatively quickly; scalability is only average.
Decision tree program
Install the Anaconda Python environment.
The Anaconda environment already includes essentially all of the basic machine-learning libraries. Also install Graphviz.
To generate a diagram of the decision tree, convert the exported .dot file to PDF:
In cmd, change into the folder that contains allEectronicInformationGainorc.dot, then run
dot -Tpdf allEectronicInformationGainorc.dot -o output.pdf
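Alternatively (this is not one of the original steps), if the Graphviz Python bindings are installed (e.g. via conda install python-graphviz), the same .dot file can be rendered to PDF from Python:

```python
import graphviz

# Read the exported .dot file and render it to a PDF next to it.
with open("allEectronicInformationGainorc.dot") as f:
    dot_source = f.read()
graphviz.Source(dot_source).render("allEectronicInformationGainorc", format="pdf", cleanup=True)
```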
- program
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import csv
import os
from sklearn import preprocessing
from sklearn.feature_extraction import DictVectorizer
from sklearn import tree
dataDir = os.path.dirname(os.path.abspath(__file__))
# Load the data and split it into features and labels
allElectroncsData = open(os.path.join(dataDir, "data", "red.csv"), "r")
reader = csv.reader(allElectroncsData)
# Read the header row (the first line of the CSV).
headers = next(reader)
print(headers)
featureList = []
labelList = []
for row in reader:
    # The last column is the class label; columns 1..n-2 are the categorical
    # features (column 0 is skipped, e.g. a row id).
    labelList.append(row[-1])
    rowDict = {}
    for i in range(1, len(row) - 1):
        rowDict[headers[i]] = row[i]
    featureList.append(rowDict)
print(labelList)
for feature in featureList:
    print(feature)
# Vectorize the features: DictVectorizer one-hot encodes the categorical attributes
vec = DictVectorizer()
dummyX = vec.fit_transform(featureList).toarray()
print(dummyX)
print(vec.get_feature_names())
# Vectorize the class labels
lb = preprocessing.LabelBinarizer()
dummyY = lb.fit_transform(labelList)
print("dummyY:"+str(dummyY))
# Use a decision tree for classification
clf = tree.DecisionTreeClassifier(criterion='entropy')  # split criterion: information entropy (ID3-style)
clf = clf.fit(dummyX,dummyY)
print("clf"+str(clf))
# Visualize the model: export it as a Graphviz .dot file
# with open("allEectronicInformationGainorc.dot", 'w') as f:
#     f = tree.export_graphviz(clf, feature_names=vec.get_feature_names(), out_file=f)
#
# with open("hello.dot", "w") as f1:
#     f1 = tree.export_graphviz(clf, feature_names=vec.get_feature_names(), out_file=f1)
# Predict: take the first sample, flip two features, and classify it
oneRowX = dummyX[0, :]
print("oneRowX: " + str(oneRowX))
newRowX = oneRowX.copy()  # copy so the original dummyX row is not modified
newRowX[0] = 1
newRowX[2] = 0
print("newRowX: " + str(newRowX))
predictedY = clf.predict(newRowX.reshape(1, -1))  # predict() expects a 2-D array of samples
print("predictedY: " + str(predictedY))