This article explains how to combine the ArrayType fields of a PySpark DataFrame into a single ArrayType field, which should be a useful reference for anyone facing the same problem.
Problem description
I have a PySpark DataFrame with 2 ArrayType fields:
>>>df
DataFrame[id: string, tokens: array<string>, bigrams: array<string>]
>>>df.take(1)
[Row(id='ID1', tokens=['one', 'two', 'two'], bigrams=['one two', 'two two'])]
I would like to combine them into a single ArrayType field:
>>>df2
DataFrame[id: string, tokens_bigrams: array<string>]
>>>df2.take(1)
[Row(id='ID1', tokens_bigrams=['one', 'two', 'two', 'one two', 'two two'])]
The syntax that works with strings does not seem to work here:
df2 = df.withColumn('tokens_bigrams', df.tokens + df.bigrams)
Thanks!
Recommended answer
Spark >= 2.4
You can use the concat function (SPARK-23736):
from pyspark.sql.functions import col, concat
df.select(concat(col("tokens"), col("tokens_bigrams"))).show(truncate=False)
# +---------------------------------+
# |concat(tokens, tokens_bigrams) |
# +---------------------------------+
# |[one, two, two, one two, two two]|
# |null |
# +---------------------------------+
To keep the data when one of the values is NULL, you can coalesce with array:
from pyspark.sql.functions import array, coalesce, col, concat

df.select(concat(
    coalesce(col("tokens"), array()),
    coalesce(col("tokens_bigrams"), array())
)).show(truncate=False)
# +--------------------------------------------------------------------+
# |concat(coalesce(tokens, array()), coalesce(tokens_bigrams, array()))|
# +--------------------------------------------------------------------+
# |[one, two, two, one two, two two] |
# |[three] |
# +--------------------------------------------------------------------+
Spark < 2.4
Unfortunately, to concatenate array columns in the general case you'll need a UDF, for example like this:
from itertools import chain
from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, StringType

def concat(type):
    def concat_(*args):
        # Treat a NULL array (None) as empty, then flatten all arguments
        return list(chain.from_iterable((arg if arg else [] for arg in args)))
    return udf(concat_, ArrayType(type))
which can be used as:
df = spark.createDataFrame(
    [(["one", "two", "two"], ["one two", "two two"]), (["three"], None)],
    ("tokens", "tokens_bigrams")
)

concat_string_arrays = concat(StringType())
df.select(concat_string_arrays("tokens", "tokens_bigrams")).show(truncate=False)
# +---------------------------------+
# |concat_(tokens, tokens_bigrams) |
# +---------------------------------+
# |[one, two, two, one two, two two]|
# |[three] |
# +---------------------------------+
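The core of the UDF above is ordinary Python: chain.from_iterable flattens the argument lists, after substituting an empty list for any NULL (None) input. As a rough sketch of that behavior, the inner helper can be exercised without a Spark session at all:

```python
from itertools import chain

def concat_(*args):
    # Same body as the inner function of the UDF above:
    # None (a NULL array) becomes [], then all arguments
    # are flattened into one list.
    return list(chain.from_iterable((arg if arg else [] for arg in args)))

# Mirrors the two example rows from the DataFrame:
print(concat_(["one", "two", "two"], ["one two", "two two"]))
# → ['one', 'two', 'two', 'one two', 'two two']
print(concat_(["three"], None))
# → ['three']
```

This is why the second row survives as ['three'] instead of becoming NULL, which is exactly what the coalesce-with-array trick achieves natively on Spark >= 2.4.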
This concludes the article on combining PySpark DataFrame ArrayType fields into a single ArrayType field; hopefully the recommended answer is helpful.