Question: What is the way of implementing batch gradient descent using sklearn for classification? We have SGDClassifier for stochastic GD, which takes a single instance at a time, and Linear/Logistic Regression, which use the normal equation.

Solution: The likely answer, as pointed out in a similar question and in the sklearn docs: is partial_fit really batch gradient descent?

SGD: the gradient of the cost function is computed and the weights are updated with a gradient descent step for each sample.

Batch/mini-batch GD: the gradient of the cost function is computed and the weights are updated with a gradient descent step once per batch.

So batch GD with a batch size of 1 == SGD.

Now that the definitions are clear, let's investigate the code of sklearn's SGDClassifier. The docstring of partial_fit says it performs one epoch of stochastic gradient descent on the given samples. So this is not batch GD; it looks more like a helper that runs the fit method with max_iter=1 (in fact, the docstring says as much). partial_fit calls _partial_fit with max_iter=1 (reference link). The fit method calls _fit, which calls _partial_fit with max_iter set to the assigned/default maximum number of iterations (reference link).

Conclusion: partial_fit does not really do batch GD, i.e. it does not compute the gradient and update the weights once per batch, but rather does so for each sample. There seems to be no mechanism in sklearn to do batch gradient descent.
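To make the distinction concrete, here is a minimal sketch of the closest workaround sklearn offers: feeding mini-batches to partial_fit in a loop. Note that this is still not true batch GD, because SGDClassifier updates the weights once per sample inside each batch; the toy dataset, batch size, and hyperparameters below are illustrative assumptions, not anything prescribed by the original answer.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Toy linearly separable data (assumption for illustration only).
rng = np.random.RandomState(0)
X = rng.randn(200, 2)
y = (X[:, 0] + X[:, 1] > 0).astype(int)

clf = SGDClassifier(random_state=0)
classes = np.unique(y)  # must be passed on the first partial_fit call

# Loop over mini-batches. Each partial_fit call runs one epoch over the
# batch, but the weights are still updated per sample within it, so this
# is SGD over batches, NOT batch gradient descent.
batch_size = 32
for start in range(0, len(X), batch_size):
    Xb = X[start:start + batch_size]
    yb = y[start:start + batch_size]
    clf.partial_fit(Xb, yb, classes=classes)

print(clf.score(X, y))
```

For genuine batch GD (one weight update per full batch), you would have to implement the update loop yourself, e.g. with NumPy, rather than rely on SGDClassifier.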