Problem description
As mentioned in many other locations on the web, adding a new column to an existing DataFrame is not straightforward. Unfortunately it is important to have this functionality (even though it is inefficient in a distributed environment), especially when trying to concatenate two DataFrames using unionAll.
What is the most elegant workaround for adding a null column to a DataFrame to facilitate a unionAll?
My version goes like this:
from pyspark.sql.types import StringType
from pyspark.sql.functions import UserDefinedFunction

# UDF that ignores its input and always returns None (typed as a string)
to_none = UserDefinedFunction(lambda x: None, StringType())
new_df = old_df.withColumn('new_column', to_none(old_df['any_col_from_old']))
Recommended answer
All you need here is a literal and a cast:
from pyspark.sql.functions import lit
from pyspark.sql.types import StringType

new_df = old_df.withColumn('new_column', lit(None).cast(StringType()))
A complete example:
from pyspark.sql import Row

row = Row("foo", "bar")  # row class matching the schema printed below
df = sc.parallelize([row(1, "2"), row(2, "3")]).toDF()
df.printSchema()
## root
## |-- foo: long (nullable = true)
## |-- bar: string (nullable = true)
new_df = df.withColumn('new_column', lit(None).cast(StringType()))
new_df.printSchema()
## root
## |-- foo: long (nullable = true)
## |-- bar: string (nullable = true)
## |-- new_column: string (nullable = true)
new_df.show()
## +---+---+----------+
## |foo|bar|new_column|
## +---+---+----------+
## |  1|  2|      null|
## |  2|  3|      null|
## +---+---+----------+
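To tie this back to the unionAll use case from the question, here is a minimal sketch (assuming the same sc session as above; other_df and its contents are made up for illustration): pad the DataFrame that is missing the column with a typed null, align the column order, and union.

from pyspark.sql.functions import lit
from pyspark.sql.types import StringType

# other_df (made-up example data) only has a 'foo' column, so give it a
# null 'bar' column of the right type before the union
other_df = sc.parallelize([(3,), (4,)]).toDF(["foo"])
padded = other_df.withColumn('bar', lit(None).cast(StringType()))

# unionAll matches columns by position, so select them in the same order
combined = df.select('foo', 'bar').unionAll(padded.select('foo', 'bar'))
combined.show()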
A Scala equivalent can be found here: Create new Dataframe with empty/null field values