This article looks at an approach to implementing linear regression with gradient descent; it may serve as a useful reference for anyone facing the same problem.
Problem Description
I'm trying to implement linear regression with gradient descent as explained in this article (https://towardsdatascience.com/linear-regression-using-gradient-descent-97a6c8700931). I've followed the implementation to the letter, yet my results overflow after a few iterations. The result I'm trying to get is approximately y = -0.02x + 8499.6.
Code:
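The asker's original listing was not preserved in this article. As a stand-in, here is a minimal sketch of the kind of implementation the linked article describes; the data, the variable names m and c, the learning rate, and the iteration count are all assumptions. On data of the magnitude the target line suggests, it overflows just as reported:

```python
import numpy as np

# Synthetic data roughly consistent with the target line y = -0.02x + 8499.6,
# with x values on the order of hundreds of thousands (an assumption).
rng = np.random.default_rng(0)
X = rng.uniform(0, 300_000, size=100)
Y = -0.02 * X + 8499.6 + rng.normal(0, 50, size=100)

m, c = 0.0, 0.0   # slope and intercept, initialised at zero
L = 0.0001        # small learning rate of the kind such tutorials use
n = float(len(X))

for i in range(1000):
    Y_pred = m * X + c
    # Gradients of the mean-squared-error loss with respect to m and c
    dm = (-2 / n) * np.sum(X * (Y - Y_pred))
    dc = (-2 / n) * np.sum(Y - Y_pred)
    m = m - L * dm
    c = c - L * dc

print(m, c)  # the iterates grow without bound and end up as inf/nan
```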
Answer

Two effects could explain the divergence:
- The gradients dm and dc along the axes m and c are handled independently of each other: m is updated in the descending direction according to dm, and c is simultaneously updated in the descending direction according to dc. But on certain curved surfaces z = f(m, c), the gradient in a direction between the m and c axes can point the opposite way, so while updating either m or c alone would converge, updating both at once moves away from the optimum.
- However, the more likely cause of failure in this case (linear regression on a point cloud) is the entirely arbitrary magnitude of the update to m and c, determined by the product of the learning rate and the gradient. Such an update can easily overstep a minimum of the target function, and can even do so with greater amplitude on every iteration: for the one-dimensional quadratic loss f(w) = a*w^2, the update multiplies w by (1 - 2*L*a), so any learning rate L > 1/a makes each step land farther from the minimum than the last. The sketch below shows one standard remedy.
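Continuing the hypothetical sketch above, one common remedy is to standardize the data before running the descent, so the gradients (and hence the update magnitudes) stay of order one for a modest learning rate, and then map the fitted coefficients back to the original scale. With the assumed data this recovers roughly y = -0.02x + 8499.6:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 300_000, size=100)
Y = -0.02 * X + 8499.6 + rng.normal(0, 50, size=100)

# Standardize both variables so the gradients are of order one.
x_mean, x_std = X.mean(), X.std()
y_mean, y_std = Y.mean(), Y.std()
xs = (X - x_mean) / x_std
ys = (Y - y_mean) / y_std

m, c = 0.0, 0.0
L = 0.1           # a modest learning rate is now safe
n = float(len(X))

for i in range(1000):
    ys_pred = m * xs + c
    dm = (-2 / n) * np.sum(xs * (ys - ys_pred))
    dc = (-2 / n) * np.sum(ys - ys_pred)
    m = m - L * dm
    c = c - L * dc

# Undo the standardization: y = slope * x + intercept on the original scale.
slope = m * y_std / x_std
intercept = y_mean + c * y_std - slope * x_mean
print(slope, intercept)   # approximately -0.02 and 8499.6
```

Shrinking the learning rate on the raw data would also stop the blow-up, but the stable rate has to be tuned to the scale of x (roughly 1e-11 for data of the assumed scale), which is exactly the arbitrariness the answer complains about.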
This concludes the article on implementing linear regression with gradient descent; hopefully the answer above is of some help.