Problem description
I'm working in PySpark, and I'd like to find a way to perform linear regressions on groups of data. Specifically, given this dataframe:
import pandas as pd
pdf = pd.DataFrame({'group_id': [1, 1, 1, 2, 2, 2, 3, 3, 3, 3],
                    'x': [0, 1, 2, 0, 1, 5, 2, 3, 4, 5],
                    'y': [2, 1, 0, 0, 0.5, 2.5, 3, 4, 5, 6]})
# sqlContext is the Spark 1.x entry point; on Spark 2.x+ use spark.createDataFrame(pdf)
df = sqlContext.createDataFrame(pdf)
df.show()
# +--------+-+---+
# |group_id|x| y|
# +--------+-+---+
# | 1|0|2.0|
# | 1|1|1.0|
# | 1|2|0.0|
# | 2|0|0.0|
# | 2|1|0.5|
# | 2|5|2.5|
# | 3|2|3.0|
# | 3|3|4.0|
# | 3|4|5.0|
# | 3|5|6.0|
# +--------+-+---+
I'd now like to be able to fit a separate y ~ ax + b model for each group_id and output a new dataframe with columns a and b and a row for each group.
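For reference, the per-group fits for this toy data can be checked locally with plain pandas and numpy (a driver-only sanity check, not a Spark solution; np.polyfit(x, y, 1) returns the slope and intercept):
import numpy as np
# Fit y = a*x + b within each group on the driver, as a reference result
ref = pdf.groupby('group_id').apply(
    lambda g: pd.Series(np.polyfit(g.x, g.y, 1), index=['a', 'b']))
print(ref)
# Expected (up to float error): a = -1, 0.5, 1 and b = 2, 0, 1 for groups 1, 2, 3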
For instance, for group 1 I could do:
from sklearn import linear_model
# Regression on group_id = 1, pulled to the driver as a pandas DataFrame
data = df.where(df.group_id == 1).toPandas()
regr = linear_model.LinearRegression()
# Reshape to the (n_samples, 1) shape sklearn expects; note .values on both columns
regr.fit(data.x.values.reshape(-1, 1), data.y.values.reshape(-1, 1))
a = regr.coef_[0][0]
b = regr.intercept_[0]
print('For group 1, y = {0}*x + {1}'.format(a, b))
# Repeat for group_id=2, group_id=3
But doing this for each group involves bringing the data back to the driver one group at a time, which doesn't take advantage of any Spark parallelism.
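Spelled out, that naive approach is just a sequential loop on the driver (a sketch of the anti-pattern; each toPandas() call collects one group's rows):
# Sequential driver-side loop: the fitting itself gets no executor parallelism
results = []
for gid in [1, 2, 3]:
    data = df.where(df.group_id == gid).toPandas()
    regr = linear_model.LinearRegression()
    regr.fit(data.x.values.reshape(-1, 1), data.y.values.reshape(-1, 1))
    results.append((gid, regr.coef_[0][0], regr.intercept_[0]))
print(pd.DataFrame(results, columns=['group_id', 'a', 'b']))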
Answer
Here's a solution I found. Instead of performing separate regressions on each group of data, create one sparse feature matrix with separate columns for each group:
from pyspark.mllib.regression import LabeledPoint, SparseVector

# Encode each row as a LabeledPoint whose features are zero except in the
# two slots for its own group: one for x (the slope) and one for the intercept
def groupid_to_feature(group_id, x, num_groups):
    intercept_id = num_groups + group_id - 1
    # Need a vector containing x and a '1' for the intercept term
    return SparseVector(num_groups * 2, {group_id - 1: x, intercept_id: 1.0})

# DataFrame.map was removed in Spark 2.0; go through .rdd
labelled = df.rdd.map(lambda line: LabeledPoint(line[2],
                                                groupid_to_feature(line[0], line[1], 3)))
labelled.take(5)
# [LabeledPoint(2.0, (6,[0,3],[0.0,1.0])),
# LabeledPoint(1.0, (6,[0,3],[1.0,1.0])),
# LabeledPoint(0.0, (6,[0,3],[2.0,1.0])),
# LabeledPoint(0.0, (6,[1,4],[0.0,1.0])),
# LabeledPoint(0.5, (6,[1,4],[1.0,1.0]))]
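Each feature vector is zero outside its own group's two slots, so a single linear model over all six features decomposes into three independent y = ax + b fits (the design matrix is block-diagonal). A quick check of the encoding:
print(groupid_to_feature(2, 5.0, 3))
# (6,[1,4],[5.0,1.0]) -- x in slot 1, the intercept flag in slot 4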
Then use Spark's LinearRegressionWithSGD to run the regression:
from pyspark.mllib.regression import LinearRegressionModel, LinearRegressionWithSGD
# intercept=False: the per-group intercepts are already encoded in the features,
# and a global intercept term would couple the groups together
lrm = LinearRegressionWithSGD.train(labelled, iterations=5000, intercept=False)
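SGD convergence depends on the iteration count and step size, so it's worth verifying the fit; one way (a sketch following the standard mllib evaluation pattern) is to compare predictions against the labels:
from pyspark.mllib.evaluation import RegressionMetrics
# Pair each prediction with its observed label and report the RMSE
preds = labelled.map(lambda p: (float(lrm.predict(p.features)), p.label))
print(RegressionMetrics(preds).rootMeanSquaredError)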
The weights from this regression contain the coefficient and intercept for each group_id, i.e.
lrm.weights
# DenseVector([-1.0, 0.5, 1.0014, 2.0, 0.0, 0.9946])
or reshaped into a DataFrame to give a and b for each group:
# Weights are laid out as [a1, a2, a3, b1, b2, b3]; reshape to one (a, b) row per group
pd.DataFrame(lrm.weights.toArray().reshape(2, 3).transpose(),
             columns=['a', 'b'], index=[1, 2, 3])
# a b
# 1 -0.999990 1.999986e+00
# 2 0.500000 5.270592e-11
# 3 1.001398 9.946426e-01
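As a footnote, the mllib RDD API used above has been in maintenance mode since Spark 2.0. On Spark 3.x the same grouped fit can be written with groupBy().applyInPandas (Spark 2.3/2.4 had the equivalent grouped-map pandas_udf), which runs one ordinary regression per group in parallel on the executors. A minimal sketch, assuming the same df:
import numpy as np

def fit_group(gdf):
    # One least-squares fit per group, executed on a worker
    a, b = np.polyfit(gdf.x, gdf.y, 1)
    return pd.DataFrame({'group_id': [gdf.group_id.iloc[0]], 'a': [a], 'b': [b]})

df.groupBy('group_id').applyInPandas(
    fit_group, schema='group_id long, a double, b double').show()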