I need to create a linear regression model in Python without using scikit-learn.
You can ignore the part that handles the input, since that part is based on the file I was given. I have included my entire code in case I did something wrong.
import pandas as pd
import numpy as np
import matplotlib.pyplot as mlt
from sklearn.cross_validation import train_test_split
data = pd.read_csv("housing.csv", delimiter = ' ', skipinitialspace = True, names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE', 'DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV'])
df_x = data.drop('MEDV', axis = 1)
df_y = data['MEDV']
x_train, x_test, y_train, y_test = train_test_split(df_x.values, df_y.values, test_size = 0.2, random_state = 4)
theta = np.zeros((1, 13))
In the code above I have just read in the input and created a parameter array called theta.
def costfn(x, y, theta):
    j = np.sum(x.dot(theta.T) - y) ** 2 / (2 * len(y))
    return j
def gradient(x, y, theta, alpha, iterations):
    cost_history = [0] * iterations
    for i in range(iterations):
        h = theta.dot(x.T)  # hypothesis
        loss = h - y
        #print(loss)
        g = loss.dot(x) / len(y)
        #print(g)
        theta = theta - alpha * g
        cost_history[i] = costfn(x, y, theta)
        #print(theta)
    return theta, cost_history
theta, cost_history = gradient(x_train, y_train, theta, 0.001, 1000)
#print(theta)
All of the lines I have commented out print nan values of the appropriate size.
I used logic similar to the one used on this blog.
Please let me know if I am wrong.
Best Answer
I think that in general your code works. What you observe is most likely related to your alpha setting. It seems to be too high, so theta diverges. At some point it reaches inf or -inf, and after that you get NaN values in the next iteration. I ran into the same problem.
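The NaN values are a consequence of IEEE floating-point arithmetic: once theta contains entries of +inf and -inf, the dot product in the hypothesis mixes them, and inf + (-inf) evaluates to nan. A quick check (the array values here are just for illustration):

theta_diverged = np.array([np.inf, -np.inf])
x_row = np.array([1.0, 2.0])
print(theta_diverged.dot(x_row))  # nan: inf * 1.0 + (-inf) * 2.0 is undefined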
You can verify that with a simple setup:
# output theta in your function
def gradient(x, y, theta, alpha, iterations):
    cost_history = [0] * iterations
    for i in range(iterations):
        h = theta.dot(x.T)  # hypothesis
        #print('h:', h)
        loss = h - y
        #print('loss:', loss)
        g = loss.dot(x) / len(y)
        #print('g:', g)
        theta = theta - alpha * g
        print('theta:', theta)
        cost_history[i] = costfn(x, y, theta)
        #print(theta)
    return theta, cost_history
# set up example data with a simple linear relationship
# where we can play around with different numbers of parameters
# conveniently
# with some noise
num_params= 2 # how many params do you want to estimate (up to 5)
# take some fixed params (we only take num_params of them)
real_params= [2.3, -0.1, 8.5, -1.8, 3.2]
# now generate the data for the number of parameters chosen
x_train= np.random.randint(-100, 100, size=(80, num_params))
x_noise= np.random.randint(-100, 100, size=(80, num_params)) * 0.001
y_train= (x_train + x_noise).dot(np.array(real_params[:num_params]))
theta= np.zeros(num_params)
Now try it with a high learning rate:
theta, cost_history = gradient(x_train, y_train, theta, 0.1, 1000)
You will most likely observe that the exponents of your theta values grow higher and higher until they finally reach inf or -inf, and after that you get NaN values. However, if you set it to a low value like 0.00001, you will see that it converges:
theta: [ 0.07734451 -0.00357339]
theta: [ 0.15208803 -0.007018 ]
theta: [ 0.22431803 -0.01033852]
theta: [ 0.29411905 -0.01353942]
theta: [ 0.36157275 -0.01662507]
theta: [ 0.42675808 -0.01959962]
theta: [ 0.48975132 -0.02246712]
theta: [ 0.55062617 -0.02523144]
...
theta: [ 2.29993382 -0.09981407]
theta: [ 2.29993382 -0.09981407]
theta: [ 2.29993382 -0.09981407]
theta: [ 2.29993382 -0.09981407]
That is very close to the real parameters 2.3 and -0.1. So you could experiment with code that adapts the learning rate, making the values converge faster and lowering the risk of divergence. You could also implement something like early stopping, so that iteration over the samples stops when the error no longer changes or the change falls below a threshold.
For example, you could make the following modifications to your function:
def gradient(
        x,
        y,
        theta=None,
        alpha=0.1,
        alpha_factor=0.1 ** (1/5),
        change_threshold=1e-10,
        max_iterations=500,
        verbose=False):
    cost_history = list()
    if theta is None:
        # theta was not passed explicitly,
        # so initialize it
        theta = np.zeros(x.shape[1])
    last_loss_sum = float('inf')
    len_y = len(y)
    for i in range(1, max_iterations + 1):
        h = theta.dot(x.T)  # hypothesis
        loss = h - y
        loss_sum = np.sum(np.abs(loss))
        if last_loss_sum <= loss_sum:
            # the loss didn't decrease,
            # so decrease alpha
            alpha = alpha * alpha_factor
        if verbose:
            print(f'pass: {i:4d} loss: {loss_sum:.8f} / alpha: {alpha}')
        theta_old = theta
        g = loss.dot(x) / len_y
        if loss_sum <= last_loss_sum and loss_sum < float('inf'):
            # only apply the change if the loss is
            # finite to avoid infinite entries in theta
            theta = theta - alpha * g
        theta_change = np.sum(np.abs(theta_old - theta))
        if theta_change < change_threshold:
            # Maybe this seems a bit awkward, but
            # the comparison against change_threshold
            # takes the relationship between theta and g
            # into account. Note that g will not have
            # an effect if theta is orders of magnitude
            # larger than g, even if g itself is large.
            # (I mean if you consider g and theta elementwise)
            cost_history.append(costfn(x, y, theta))
            break
        cost_history.append(costfn(x, y, theta))
        last_loss_sum = loss_sum
    return theta, cost_history
These changes take care of early stopping, adjusting alpha automatically, and keeping theta from taking on infinite values. In the minimal case you only need to pass X and y; all other parameters get default values. If you want to see how the loss decreases with each iteration, set verbose=True.
Original question: "python - Why are my array values not updating? Linear regression" on Stack Overflow: https://stackoverflow.com/questions/58152672/