This article explains how to merge WrappedArrays within a Pyspark dataframe; the original question and the recommended answer follow.

Problem Description

The current Pyspark dataframe has this structure (a list of WrappedArrays for col2):

+---+-------------------------------------------------+
|id |col2                                             |
+---+-------------------------------------------------+
|a  |[WrappedArray(code2), WrappedArray(code1, code3)]|
|b  |[WrappedArray(code5), WrappedArray(code6, code8)]|
+---+-------------------------------------------------+

This is the structure I would like to have (a flattened list for col2):

+---+---------------------+
|id |col2                 |
+---+---------------------+
|a  |[code2, code1, code3]|
|b  |[code5, code6, code8]|
+---+---------------------+

But I'm not sure how to do that transformation. I tried a flatMap, but that didn't seem to work. Any suggestions?

Recommended Answer

You can do this in two ways, with an RDD or with a udf. Here is an example:

# sqlContext is the pre-2.0 entry point; on Spark 2.x+ a SparkSession works the same way
df = sqlContext.createDataFrame([
    ['a', [['code2'], ['code1', 'code3']]],
    ['b', [['code5', 'code6'], ['code8']]]
], ["id", "col2"])
df.show(truncate=False)
+---+-------------------------------------------------+
|id |col2                                             |
+---+-------------------------------------------------+
|a  |[WrappedArray(code2), WrappedArray(code1, code3)]|
|b  |[WrappedArray(code5, code6), WrappedArray(code8)]|
+---+-------------------------------------------------+

RDD:-

from functools import reduce  # reduce lives in functools on Python 3

# On Spark 2.x+, map is only available on the underlying RDD, hence df.rdd
df.rdd.map(lambda row: (row[0], reduce(lambda x, y: x + y, row[1]))).toDF().show(truncate=False)
+---+---------------------+
|_1 |_2                   |
+---+---------------------+
|a  |[code2, code1, code3]|
|b  |[code5, code6, code8]|
+---+---------------------+
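
A small variation (not in the original answer): pass explicit column names to toDF so the flattened result keeps the id and col2 names instead of the generated _1 and _2:

df.rdd.map(lambda row: (row[0], reduce(lambda x, y: x + y, row[1]))).toDF(["id", "col2"]).show(truncate=False)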

UDF:-

from functools import reduce  # reduce lives in functools on Python 3
from pyspark.sql import functions as F
import pyspark.sql.types as T

def fudf(val):
    # Concatenate the inner arrays into one flat list
    return reduce(lambda x, y: x + y, val)

flattenUdf = F.udf(fudf, T.ArrayType(T.StringType()))
df.select("id", flattenUdf("col2").alias("col2")).show(truncate=False)
+---+---------------------+
|id |col2                 |
+---+---------------------+
|a  |[code2, code1, code3]|
|b  |[code5, code6, code8]|
+---+---------------------+
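
Worth noting (an addition for newer Spark versions, not part of the original answer): since Spark 2.4 there is a built-in flatten function, so neither the RDD round-trip nor a Python udf is needed:

from pyspark.sql import functions as F

# flatten collapses an array of arrays by one level (Spark 2.4+)
df.select("id", F.flatten("col2").alias("col2")).show(truncate=False)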

That concludes this article on merging WrappedArrays within a Pyspark dataframe; hopefully the recommended answer is helpful.
