After working through the theory of logistic regression, we derive the log-likelihood function:

$$\ell(w) = \sum_{i=1}^{m} \Bigl[\, y_i \log h_w(x_i) + (1 - y_i)\log\bigl(1 - h_w(x_i)\bigr) \Bigr]$$

Our goal now is to find a vector $w$ that maximizes $\ell(w)$, where

$$h_w(x) = g(w^{T} x)$$

$$g(z) = \frac{1}{1 + e^{-z}}$$
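
As a quick numerical check of these definitions (the weight and feature values below are made up for illustration):

    from numpy import array, exp

    w = array([0.5, -0.25, 1.0])    # illustrative weight vector
    x = array([1.0, 2.0, 0.5])      # illustrative feature vector
    z = w @ x                       # w^T x = 0.5 - 0.5 + 0.5 = 0.5
    h = 1.0 / (1 + exp(-z))         # g(w^T x) ~= 0.622
    print(h)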

Taking the partial derivative of this likelihood function with respect to each weight $w_j$, we obtain

$$\frac{\partial \ell(w)}{\partial w_j} = \sum_{i=1}^{m} \bigl( y_i - h_w(x_i) \bigr)\, x_i^{(j)}$$
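
This follows from the sigmoid derivative identity $g'(z) = g(z)\bigl(1 - g(z)\bigr)$; applying the chain rule to one term of $\ell(w)$:

$$\frac{\partial}{\partial w_j}\Bigl[ y_i \log h_w(x_i) + (1-y_i)\log\bigl(1-h_w(x_i)\bigr) \Bigr]
= \Bigl( \frac{y_i}{h_w(x_i)} - \frac{1-y_i}{1-h_w(x_i)} \Bigr)\, h_w(x_i)\bigl(1-h_w(x_i)\bigr)\, x_i^{(j)}
= \bigl(y_i - h_w(x_i)\bigr)\, x_i^{(j)}$$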

According to the gradient ascent algorithm, we have

$$w := w + \alpha \nabla_w \ell(w)$$

Substituting the gradient component-wise, we further obtain

$$w_j := w_j + \alpha \sum_{i=1}^{m} \bigl( y_i - h_w(x_i) \bigr)\, x_i^{(j)}$$

We can initialize the vector $w$ to zeros or to random values, and then iterate until the specified precision is reached.
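
The reference implementation below simply runs a fixed number of cycles. As a minimal sketch of what "iterate until the specified precision" could look like, using plain NumPy arrays rather than the matrix class below (the function name and parameters here are illustrative, not from the original post):

    import numpy as np

    def grad_ascent_tol(X, y, alpha=0.001, tol=1e-6, max_cycles=10000):
        # Hypothetical variant: stop once the update step drops below tol.
        n = X.shape[1]
        w = np.zeros((n, 1))                    # initialize w to zeros
        y = y.reshape(-1, 1)
        for _ in range(max_cycles):
            h = 1.0 / (1.0 + np.exp(-X @ w))    # h = g(Xw)
            step = alpha * (X.T @ (y - h))      # alpha * X^T (y - h)
            w += step
            if np.linalg.norm(step) < tol:      # reached the required precision
                break
        return w

Since the step is just the gradient scaled by alpha, checking its norm is equivalent to checking the gradient norm.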

    from numpy import *

    def sigmoid(inX):
        # sigmoid g(z) = 1 / (1 + e^{-z}); works elementwise on matrices
        return 1.0 / (1 + exp(-inX))

    def gradAscent(dataMatIn, classLabels):
        dataMatrix = mat(dataMatIn)              # convert to NumPy matrix, shape (m, n)
        labelMat = mat(classLabels).transpose()  # column vector of labels, shape (m, 1)
        m, n = shape(dataMatrix)
        alpha = 0.001                            # learning rate
        maxCycles = 500                          # fixed number of iterations
        weights = ones((n, 1))                   # initial weights; any start works since l(w) is concave
        for k in range(maxCycles):               # heavy on matrix operations
            h = sigmoid(dataMatrix * weights)    # h = g(Xw), shape (m, 1)
            error = labelMat - h                 # y - h
            weights = weights + alpha * dataMatrix.transpose() * error  # w := w + alpha * X^T (y - h)
        return weights
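
A quick usage sketch on a tiny hand-made dataset (the data values here are illustrative, not from the original post):

    # Each row is [bias term, feature 1, feature 2]; labels are 0/1.
    dataMatIn = [[1.0, 0.5, 1.2],
                 [1.0, -1.5, 0.3],
                 [1.0, 2.0, 2.5],
                 [1.0, -0.7, -1.1]]
    classLabels = [1, 0, 1, 0]
    weights = gradAscent(dataMatIn, classLabels)
    print(weights)    # a 3x1 matrix holding the learned w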