So I'm trying to do some statistical analysis, and the sum works differently from stdev.
Sum works fine, like this:
stats[0] = myData2.map(lambda (Column, values): (sum(values))).collect()
Stdev is formatted differently and does not work:
stats[4] = myData2.map(lambda (Column, values): (values)).stdev()
I get the following error:
TypeError: unsupported operand type(s) for -: 'ResultIterable' and 'float'
Best Answer
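The error itself is the clue: RDD.stdev() expects an RDD of plain numbers, but after a groupByKey() each value is a pyspark ResultIterable, and Spark's stats code tries to subtract the running mean (a float) from it, hence "ResultIterable - float". A minimal sketch of two ways around it, assuming myData2 was built with groupByKey() and sc is an active SparkContext (the sample pairs below are made up for illustration):
import statistics
pairs = sc.parallelize([(1, 1), (1, 2), (1, 3), (2, 6), (2, 7)])
myData2 = pairs.groupByKey()  # values become ResultIterable objects
# Global stdev: flatten the groups back into plain numbers first
myData2.flatMap(lambda kv: kv[1]).stdev()
# Per-key stdev: compute inside each group instead of on the RDD
myData2.mapValues(lambda vs: statistics.pstdev(list(vs))).collect()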
First solution, using NumPy
data = [(1, [1, 2, 3, 4, 5]), (2, [6, 7, 8, 9]), (3, [1, 3, 5, 7])]
dataRdd = sc.parallelize(data)
import numpy
dataRdd.mapValues(lambda values: numpy.std(values)).collect()
# Result
# [(1, 1.4142135623730951), (2, 1.1180339887498949), (3, 2.2360679774997898)]
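One caveat: numpy.std defaults to the population standard deviation (ddof=0), which is also what RDD.stdev() returns; pass ddof=1 if you want the sample standard deviation instead (the RDD counterpart is sampleStdev()):
dataRdd.mapValues(lambda values: numpy.std(values, ddof=1)).collect()
# [(1, ~1.5811), (2, ~1.2910), (3, ~2.5820)]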
Second solution, doing it yourself, more distributed
data = [(1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (2, 6), (2, 7), (2, 8), (2, 9), (3, 1), (3, 3), (3, 5), (3, 7)]
dataRdd = sc.parallelize(data)
# Generate RDD of (Key, (Sum, Sum of squares, Count));
# the accumulator acc holds the running (sum, sum of squares, count)
dataSumsRdd = dataRdd.aggregateByKey((0.0, 0.0, 0.0),
    lambda acc, value: (acc[0] + float(value), acc[1] + float(value) ** 2, acc[2] + 1.0),
    lambda a, b: (a[0] + b[0], a[1] + b[1], a[2] + b[2]))
# Generate RDD of (Key, (Count, Average, Std Dev)), using the identity
# Var(X) = E[X^2] - (E[X])^2 to get the variance from the running sums
import math
dataStatsRdd = dataSumsRdd.mapValues(
    lambda s: (s[2], s[0] / s[2], math.sqrt(s[1] / s[2] - (s[0] / s[2]) ** 2)))
dataStatsRdd.collect()
# Result
# [(1, (5.0, 3.0, 1.4142135623730951)), (2, (4.0, 7.5, 1.118033988749895)), (3, (4.0, 4.0, 2.23606797749979))]
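As a sanity check, the same numbers fall out of a plain local computation with Python's statistics module (pstdev is the population standard deviation, matching the formula above); this is just an illustration, not part of the Spark job:
from collections import defaultdict
import statistics
groups = defaultdict(list)
for key, value in data:
    groups[key].append(value)
for key, values in sorted(groups.items()):
    print(key, len(values), statistics.mean(values), statistics.pstdev(values))
# 1 5 3 1.4142135623730951
# 2 4 7.5 1.118033988749895
# 3 4 4 2.23606797749979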
Regarding "python - Spark .stdev() Python problem", we found a similar question on Stack Overflow: https://stackoverflow.com/questions/28812912/