I have a DataFrame with 100 million rows and 5000+ columns. I am trying to compute the correlation between colx and each of the remaining 5000+ columns.
from pyspark.sql import functions as func
from pyspark.sql.functions import mean, corr, broadcast

aggList1 = [mean(col).alias(col + '_m') for col in df.columns]  # excluding the key columns
df21 = df.groupBy('key1', 'key2', 'key3', 'key4').agg(*aggList1)
df = df.join(broadcast(df21), ['key1', 'key2', 'key3', 'key4'])
df = df.select([func.round(func.col(colmd) - func.col(colmd + '_m'), 8).alias(colmd)
                for colmd in all5Kcolumns])
aggCols = [corr(colx, col).alias(col) for col in colsall5K]
df2 = df.groupBy('key1', 'key2', 'key3').agg(*aggCols)
Currently this does not work because of Spark's 64KB code-generation limit (even on Spark 2.2). As a workaround I loop over the columns 300 at a time and combine everything at the end, but on a cluster with 40 nodes (10 cores and 100GB per node) this takes more than 30 hours. Any tuning advice? A sketch of that loop is shown after the list below.
Things already tried:
- Repartitioning the DF to 10,000 partitions
- Checkpointing in each loop iteration
- Caching in each loop iteration
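For reference, a minimal sketch of that chunked loop, assuming the colsall5K, colx, and key columns from the snippet above (the batch size and the final join-on-keys step are my assumptions about how the batches are combined):

from pyspark.sql.functions import corr

batch = 300
parts = []
for i in range(0, len(colsall5K), batch):
    chunk = colsall5K[i:i + batch]
    aggCols = [corr(colx, c).alias(c) for c in chunk]
    parts.append(df.groupBy('key1', 'key2', 'key3').agg(*aggCols))

# combine the per-batch results on the grouping keys
result = parts[0]
for part in parts[1:]:
    result = result.join(part, ['key1', 'key2', 'key3'])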
Best answer
You can try using some NumPy and RDDs. First, a bunch of imports:
from operator import itemgetter
import numpy as np
from pyspark.statcounter import StatCounter
Let's define some variables:
keys = ["key1", "key2", "key3"] # list of key column names
xs = ["x1", "x2", "x3"] # list of column names to compare
y = "y" # name of the reference column
and some helpers:
def as_pair(keys, y, xs):
    """Given key names, y name, and xs names,
    return a tuple of (key, array-of-values)."""
    key = itemgetter(*keys)
    value = itemgetter(y, *xs)  # Python 3 syntax

    def as_pair_(row):
        return key(row), np.array(value(row))
    return as_pair_

def init(x):
    """Init function for combineByKey:
    initialize a new StatCounter and merge the first value."""
    return StatCounter().merge(x)

def center(means):
    """Center a row's values given a dictionary of mean arrays."""
    def center_(row):
        key, value = row
        return key, value - means[key]
    return center_

def prod(arr):
    """Multiply the first element (y) by each remaining element (the xs)."""
    return arr[0] * arr[1:]

def corr(stddev_prods):
    """Scale a row to unit stddev given a dictionary of stddev products."""
    def corr_(row):
        key, value = row
        return key, value / stddev_prods[key]
    return corr_
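For intuition, here is what prod does to a single (already centered) row vector [y, x1, x2]; the toy values below are just an illustration:

row = np.array([2.0, 1.0, -3.0])   # [y, x1, x2]
prod(row)                          # -> array([ 2., -6.]), i.e. [y*x1, y*x2]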
Now convert the DataFrame into an RDD of (key, values) pairs:

pairs = df.rdd.map(as_pair(keys, y, xs))
Next, let's compute the per-group statistics:
stats = (pairs
    .combineByKey(init, StatCounter.merge, StatCounter.mergeStats)
    .collectAsMap())
means = {k: v.mean() for k, v in stats.items()}
Note: with 5000 features and 7000 groups there should be no problem keeping this structure in memory. For larger datasets you may have to use an RDD and join instead, but that will be slower.
Center the data:
centered = pairs.map(center(means))
Compute the covariance:
covariance = (centered
    .mapValues(prod)
    .combineByKey(init, StatCounter.merge, StatCounter.mergeStats)
    .mapValues(StatCounter.mean))
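Because the rows were centered beforehand, the per-group mean of each product is exactly the covariance, and the next step rescales it into a Pearson correlation:

cov(y, x_i) = E[(y - mean(y)) * (x_i - mean(x_i))]
corr(y, x_i) = cov(y, x_i) / (stddev(y) * stddev(x_i))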
Finally, the correlations:
stddev_prods = {k: prod(v.stdev()) for k, v in stats.items()}
correlations = covariance.map(corr(stddev_prods))
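If you prefer the final result as a DataFrame rather than an RDD of pairs, a minimal sketch (to_row and corr_df are hypothetical names; it reuses the keys and xs lists defined above):

def to_row(kv):
    key, values = kv
    return tuple(key) + tuple(float(v) for v in values)

corr_df = correlations.map(to_row).toDF(keys + xs)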
Example data:
df = sc.parallelize([
    ("a", "b", "c", 0.5, 0.5, 0.3, 1.0),
    ("a", "b", "c", 0.8, 0.8, 0.9, -2.0),
    ("a", "b", "c", 1.5, 1.5, 2.9, 3.6),
    ("d", "e", "f", -3.0, 4.0, 5.0, -10.0),
    ("d", "e", "f", 15.0, -1.0, -5.0, 10.0),
]).toDF(["key1", "key2", "key3", "y", "x1", "x2", "x3"])
Results with the DataFrame API for comparison (here corr is pyspark.sql.functions.corr, not the helper defined above):

df.groupBy(*keys).agg(*[corr(y, x) for x in xs]).show()
+----+----+----+-----------+------------------+------------------+
|key1|key2|key3|corr(y, x1)| corr(y, x2)| corr(y, x3)|
+----+----+----+-----------+------------------+------------------+
| d| e| f| -1.0| -1.0| 1.0|
| a| b| c| 1.0|0.9972300220940342|0.6513360726920862|
+----+----+----+-----------+------------------+------------------+
and with the method provided above:
correlations.collect()
[(('a', 'b', 'c'), array([ 1. , 0.99723002, 0.65133607])),
(('d', 'e', 'f'), array([-1., -1., 1.]))]
This solution, while somewhat involved, is quite flexible and can easily be adjusted to handle different data distributions. It should also be possible to get a further boost with JIT compilation; a sketch of that is shown below.
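For example, a minimal sketch of JIT-compiling the per-row product with Numba (assuming Numba is installed on the executors; prod_jit is a hypothetical name, and whether this actually helps depends on whether the time is spent in arithmetic or in serialization):

from numba import njit

@njit(cache=True)
def prod_jit(arr):
    # same computation as prod above: multiply y (first element) by each x
    return arr[0] * arr[1:]

covariance = (centered
    .mapValues(prod_jit)
    .combineByKey(init, StatCounter.merge, StatCounter.mergeStats)
    .mapValues(StatCounter.mean))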
Related Stack Overflow question (python-3.x - pyspark corr for each group of a DF, more than 5K columns): https://stackoverflow.com/questions/42240631/