Hey, I'm trying to understand the gradient descent algorithm for a linear hypothesis. I can't figure out whether my implementation is correct. I think it isn't, but I can't see what I'm missing.
# x, y: training data arrays; le is assumed to be the number of samples (len(x))
theta0 = 1
theta1 = 1
alpha = 0.01
for i in range(0, le * 10):        # multiple passes over the data
    for j in range(0, le):         # one stochastic update per training example
        temp0 = theta0 - alpha * (theta1 * x[j] + theta0 - y[j])
        temp1 = theta1 - alpha * (theta1 * x[j] + theta0 - y[j]) * x[j]
        theta0 = temp0
        theta1 = temp1
print("Values of slope and y intercept derived using gradient descent ", theta1, theta0)
It gives me the right answer to about four digits of precision, but I got confused when I compared it with other programs on the web.
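A quick way to sanity-check the slope and intercept is to compare against a closed-form least-squares fit; a minimal sketch, assuming x and y are NumPy arrays of the training data (the sample values below are hypothetical):

import numpy as np

# hypothetical training data standing in for the real x and y arrays
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])

# np.polyfit with degree 1 returns [slope, intercept] of the least-squares line
slope, intercept = np.polyfit(x, y, 1)
print("Reference slope and y intercept from np.polyfit:", slope, intercept)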
Thanks in advance!
Best Answer
An implementation of the gradient descent algorithm for a one-dimensional test function:
import numpy as np

cur_x = 1              # initial value
gamma = 1e-2           # step size multiplier
precision = 1e-10
prev_step_size = cur_x

# test function f(x) = (sin(x) + x^2)^2
def foo_func(x):
    return (np.sin(x) + x**2)**2

# derivative of the test function: f'(x) = 2*(sin(x) + x^2)*(cos(x) + 2x)
def foo_grad(x):
    return 2 * (np.sin(x) + x**2) * (np.cos(x) + 2 * x)

# iterate until the step taken is smaller than the required precision
while prev_step_size > precision:
    prev_x = cur_x
    cur_x += -gamma * foo_grad(prev_x)   # step against the gradient, not the function value
    prev_step_size = abs(cur_x - prev_x)

print("The local minimum occurs at %f" % cur_x)
About python - gradient descent in python implementation issue, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/45768848/